we consider a wireless communication network where users are able to harvest energy from nature using rechargeable batteries. such energy harvesting capabilities will make sustainable and environmentally friendly deployment of wireless communication networks possible. while energy-efficient scheduling policies have been well-investigated in traditional battery powered (non-rechargeable) systems, energy-efficient scheduling in energy harvesting networks with nodes that have rechargeable batteries has only recently been considered. references consider a single-user communication system with an energy harvesting transmitter, and develop a packet scheduling scheme that minimizes the time by which all of the packets are delivered to the receiver. in this paper, we consider a multi-user extension of the work in . in particular, we consider a wireless broadcast channel with an energy harvesting transmitter. as shown in fig. [fig:bc], we consider a broadcast channel with one transmitter and two receivers, where the transmitter node has three queues. the data queues store the data arrivals intended for the individual receivers, while the energy queue stores the energy harvested from the environment. our objective is to adaptively change the transmission rates that go to both users according to the instantaneous data and energy queue sizes, such that the total _transmission completion time_ is minimized. in this paper, we focus on finding the optimum _off-line_ schedule, by assuming that the energy arrival profile at the transmitter is known ahead of time in an off-line manner, i.e., the energy harvesting times and the corresponding harvested energy amounts are known at time . we assume that there are a total of bits that need to be delivered to receiver 1, and bits that need to be delivered to receiver 2, available at the transmitter at time .
as shown in fig. [fig:bc_system], energy arrives (is harvested) at points in time marked with ; in particular, denotes the amount of energy harvested at time . our goal is to develop a method of transmission to minimize the time, , by which all of the data packets are delivered to their respective receivers. the optimal packet scheduling problem in a single-user energy harvesting communication system is investigated in . in , we prove that the optimal scheduling policy has a ``majorization'' structure, in that the transmit power is kept constant between energy harvests, the sequence of transmit powers increases monotonically, and only changes at some of the energy harvesting instances; when the transmit power changes, the energy constraint is tight, i.e., at the times when the transmit power changes, the total consumed energy equals the total harvested energy. in , we develop an algorithm to obtain the optimal off-line scheduling policy based on these properties. reference extends to the case where rechargeable batteries have finite sizes. we extend in to a fading channel. references investigate two related problems. the first problem is to maximize the throughput (number of bits transmitted) with a given deadline constraint, and the second problem is to minimize the transmission completion time with a given number of bits to transmit. these two problems are ``dual'' to each other in the sense that, with a given energy arrival profile, if the maximum number of bits that can be sent by a deadline is in the first problem, then the minimum time to transmit bits in the second problem must be the deadline in the first problem, and the optimal transmission policies for these two problems must be identical.
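to make the ``majorization'' structure above concrete, the following is a minimal off-line sketch of the single-user case: given harvest instants and amounts known in advance, together with a deadline, it repeatedly picks the smallest feasible constant power, which yields a monotonically non-decreasing power staircase that changes only at harvest instants where the energy constraint is tight. this is an illustrative reconstruction, not the authors' code; the function name and the (times, energies, deadline) representation are our own assumptions.

```python
def optimal_powers(times, energies, deadline):
    """off-line single-user schedule: times[k] is the k-th harvest
    instant (times[0] == 0) and energies[k] the amount harvested then.
    returns (start, end, power) segments that spend all energy by the
    deadline with the minimal-slope ('majorization') structure."""
    segments = []
    t, i = times[0], 0
    while t < deadline:
        # among all future epochs, find the smallest constant power
        # that spends every unit of energy harvested up to that epoch;
        # picking the minimum guarantees energy causality inside the
        # interval and makes the power sequence non-decreasing
        best_p, best_j = None, None
        for j in range(i, len(times)):
            t_end = times[j + 1] if j + 1 < len(times) else deadline
            if t_end <= t:
                continue
            p = sum(energies[i:j + 1]) / (t_end - t)
            if best_p is None or p < best_p:
                best_p, best_j = p, j
        t_end = times[best_j + 1] if best_j + 1 < len(times) else deadline
        segments.append((t, t_end, best_p))
        t, i = t_end, best_j + 1
    return segments
```

for instance, with harvests of 4, 2 and 6 energy units at times 0, 2 and 4 and a deadline of 6, this yields power 1.5 over [0, 4) and 3.0 over [4, 6): the powers increase monotonically and change only at a harvest instant where all energy harvested so far has been consumed.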
in this paper, we will follow this ``dual problems'' approach. we will first attack and solve the first problem to determine the structural properties of the optimal solution. we will then utilize these structural properties to develop an iterative algorithm for the second problem. our iterative approach has the goal of reducing the two-user broadcast problem to a single-user problem as much as possible, and utilizing the single-user solution in . the second problem is also considered in the independent work , which uses convex optimization techniques to reduce the problem into local sub-problems that consider only two energy arrival epochs at a time. we first analyze the structural properties of the optimal policy for the first problem, where our goal is to maximize the number of bits delivered to both users under a given deadline constraint. to that end, we first determine the _maximum departure region_ with a given deadline constraint. the maximum departure region is defined as the set of all that can be transmitted to the users reliably with a given deadline. in order to do that, we consider the problem of maximizing under the energy causality constraints for the transmitter, for all . varying traces the boundary of the maximum departure region. we prove that the optimal _total_ transmit power policy is independent of the values of , , and it has the same ``majorization'' structure as the single-user non-fading solution. as for the way of splitting the total transmit power between the two users, we prove that there exists a _cut-off_ power level for the stronger user, i.e.
, only the power above this _cut-off_ power level is allocated to the weaker user. we then consider the second problem, where our goal is to minimize the time, , by which a given number of bits are delivered to their intended receivers. as discussed, since the second problem is ``dual'' to the first problem, the optimal transmission policy in this problem has the same structural properties as in the first problem. therefore, in the second problem as well, there exists a _cut-off_ power level. the problem then becomes that of finding an optimal _cut-off_ power such that the transmission times for both users become identical and minimized. with these optimal structural properties, we develop an iterative algorithm that finds the optimal schedule efficiently. in particular, we first use the fact that the optimum total transmit power has the same structural properties as in the single-user problem, to obtain the first optimal total power, , i.e., the optimal total power in the first epoch. then, given the fact that there exists a _cut-off_ power level, , for the stronger user, the optimal transmit strategy depends on whether is smaller or larger than , which, at this point, is unknown. therefore, we have two cases to consider. if is smaller than , then the stronger user will always have a constant, , portion of the total transmit power. this reduces the problem to a single-user problem for the second user, together with a fixed-point equation in a single variable ( ) to be solved to ensure that the transmissions to both users end at the same time. on the other hand, if is larger than , this means that all of must be spent to transmit to the first (stronger) user.
in this case, the number of bits delivered to the first user in this time duration can be subtracted from the total number of bits to be delivered to the first user, and the problem can be started anew with the updated number of bits after the first epoch. therefore, in both cases, the broadcast channel problem is essentially reduced to single-user problems, and the approach in is utilized recursively to solve the overall problem. the system model is as shown in figs. [fig:bc] and [fig:bc_system]. the transmitter has an energy queue and two data queues (fig. [fig:bc]). the physical layer is modeled as an awgn broadcast channel, where the received signals at the first and second receivers are , where is the transmit signal, is gaussian noise with zero mean and unit variance, and is gaussian noise with zero mean and variance , where . therefore, the second user is the _degraded_ (weaker) user in our broadcast channel. assuming that the transmitter transmits with power , the capacity region for this two-user awgn broadcast channel is , where is the fraction of the total power spent on the message transmitted to the first user. let us denote for future use. then, the capacity region is , . this capacity region is shown in fig. [fig:bc_capacity]. working on the boundary of the capacity region, we have . as shown in fig. [fig:bc], the transmitter has bits to transmit to the first user, and bits to transmit to the second user. energy is harvested at times with amounts . our goal is to select a transmission policy that minimizes the time, , by which all of the bits are delivered to their intended receivers. the transmitter adapts its transmit power and the portions of the total transmit power used to transmit signals to the two users according to the available energy level and the remaining number of bits. the energy consumed must satisfy the causality constraints, i.e.
, at any given time , the total amount of energy consumed up to time must be less than or equal to the total amount of energy harvested up to time . before we proceed to give a formal definition of the optimization problem and propose the solution, we start with the ``dual'' problem of this transmission completion time minimization problem, i.e., instead of trying to find the minimal , we aim to identify the maximum number of bits the transmitter can deliver to both users by any fixed time . as we will observe in the next section, solving the ``dual'' problem enables us to identify the optimal structural properties for both problems, and these properties eventually help us reduce the original problem into simple scenarios, which can be solved efficiently. in this section, our goal is to characterize the maximum departure region for a given deadline . we define it as follows. for any fixed transmission duration , the maximum departure region, denoted as , is the union of under any feasible rate allocation policy over the duration , i.e., , subject to the energy constraint for . we call any policy which achieves the boundary of optimal. in the single-user scenario in , we first examined the structural properties of the optimal policy. based on these properties, we developed an algorithm to find the optimal scheduling policy. in this broadcast scenario also, we will first analyze the structural properties of the optimal policy, and then obtain the optimal solution based on these structural properties. the following lemma, which was proved for the single-user problem in , also holds for the broadcast problem in . [lemma:const] under the optimal policy, the transmission rate remains constant between energy harvests, i.e.
, the rate only potentially changes at an energy harvesting epoch. we prove this using the strict convexity of . if the transmission rate for any user changes between two energy harvesting epochs, then we can always equalize the transmission rate over that duration without violating the energy constraints. based on the convexity of , after equalization of rates, the energy consumed over that duration decreases, and the saved energy can be allocated to both users to increase the departures. therefore, changing rates between energy harvests is sub-optimal. hence, in the following, we only consider policies where the rates are constant between any two consecutive energy arrivals. we denote the rates that go to both users as over the duration . with this property, an illustration of the maximum departure region is shown in fig. [fig:trajectory]. is a convex region. proving the convexity of is equivalent to proving that, given any two achievable points and in , any point on the line between these two points is also achievable, i.e., in . assume that and can be achieved with rate allocation policies and , respectively. consider the policy , where . then, the energy consumed up to is . therefore, the energy causality constraint is satisfied for any ] or it is higher than . assume . therefore, the transmission completion time for the first (stronger) user is . once is fixed, we can obtain the minimum transmission completion time for the second user, , by subtracting the energy consumed by the first user, and treating as interference for the second user. this reduces the problem to the single-user problem for the second user with fading, where the fading level is over , and afterwards. the single-user problem with fading is studied in .
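since the displayed rate expressions were lost in extraction, the following is a hedged reconstruction of the standard two-user degraded awgn broadcast channel rates underlying the cut-off structure discussed above; the symbols $P$ (total transmit power), $P_c$ (cut-off level) and $\sigma^2 > 1$ (the second receiver's noise variance) are our own notation, not recovered from the original:

```latex
\[
r_1 = \frac{1}{2}\log\!\left(1 + \min(P, P_c)\right), \qquad
r_2 = \frac{1}{2}\log\!\left(1 + \frac{(P - P_c)^{+}}{P_c + \sigma^2}\right),
\]
```

so that the stronger user receives all power up to the cut-off level, and only the excess $(P - P_c)^{+}$ is allocated to the weaker user, who sees the stronger user's signal as interference.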
since obtaining the minimal transmission completion time is not as straightforward for the fading channel, a more approachable way is to calculate the maximum number of bits departed from the second user by , denoted as . in order to do that, we first identify the optimal power allocation policy with a fixed deadline . this can be done according to lemma [lemma4]. assume that the optimal power allocation gives us . then, we allocate to the first user over the whole duration, and allocate the remaining power to the second user. based on ([eqn:rate2]), we calculate the transmission rate for the second user over each duration, and obtain according to . we observe that, given , is a monotonically increasing function of . moreover, given , is a monotonically decreasing function of . if is smaller than , it implies that , and we need to decrease the rate for the first user to make and equal. based on lemma [lemma:cutoff], this also implies that the transmission power for the first user is a constant. in particular, is the unique solution of . note that is a continuous, strictly monotonically decreasing function of , hence the solution for in ([eq]) is unique. since is a decreasing function of and is a decreasing function of , we can use the bisection method to solve ([eq]). in this case, the minimum transmission completion time is . if is larger than , that implies , and we need to increase the power allocated to the first user to make and equal, i.e., . therefore, from lemma [lemma:cutoff], over the duration , the optimal policy is to allocate the entire to the first user only. we allocate to the first user, calculate the number of bits departed for the first user, and remove them from . this simply reduces the problem to that of transmitting bits starting at time , where . the process is illustrated in fig. [fig:search_pc]. then, the minimum transmission completion time is , where is the number of recursions needed to get .
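the bisection step mentioned above can be sketched as follows: because the quantity being matched is a continuous, strictly monotone function of the cut-off power, a simple bisection recovers the unique root. the function names and interval arguments here are illustrative assumptions (a strictly decreasing map is assumed), not the paper's notation:

```python
def solve_cutoff(bits_by_deadline, target_bits, p_lo, p_hi, tol=1e-9):
    """find the cut-off power p with bits_by_deadline(p) == target_bits,
    assuming bits_by_deadline is continuous and strictly decreasing
    on [p_lo, p_hi] and the target lies between its endpoint values."""
    while p_hi - p_lo > tol:
        mid = 0.5 * (p_lo + p_hi)
        if bits_by_deadline(mid) > target_bits:
            p_lo = mid  # too many bits depart: the root lies above mid
        else:
            p_hi = mid  # too few bits depart: the root lies below mid
    return 0.5 * (p_lo + p_hi)
```

for the toy decreasing map f(p) = 10 - p with target 4, the routine returns p close to 6.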
in both scenarios, we reduce the problem into a simple form, and obtain the final optimal policy. before we proceed to prove the optimality of the algorithm, we introduce the following lemma, which is useful in the proof of the optimality of the algorithm. [lemma:mono] monotonically increases in ; monotonically increases in also. the monotonicity of both functions can be verified by taking derivatives, and , where the inequality follows since . therefore, is a strictly concave function, and its first derivative monotonically decreases as increases. since when , when , we have ; therefore, the monotonicity follows. similarly, we have and ; again, the concavity implies that the first derivative is positive when , and the monotonicity follows. the algorithm is feasible and optimal. we first prove the optimality. in order to prove that the algorithm is optimal, we need to prove that is optimal. once we prove the optimality of , the optimality of , , follows. since the solution obtained using our algorithm always has the optimal structure described in lemma [lemma:cutoff], the optimality of the power allocation also implies the optimality of the rate selection; thus, the optimality of the algorithm follows. therefore, in the following, we prove that is optimal. first, we note that is the minimal slope up to . we need to prove that is also the minimal slope up to the final transmission completion time, . let us define as follows . assume that with , we allocate to the first user, and finish using constant rates. then, we allocate to the first user, and the rest to the second user. based on lemma [lemma:mono], we have . therefore, is an upper bound for the optimal transmission completion time. since is the minimal slope up to , we conclude that is optimal throughout the transmission. following similar arguments, we can prove the optimality of the rest of the power allocations. this completes the proof of optimality.
in order to prove that the allocation is feasible, we need to show that the power allocation for the first user is always feasible in each step. therefore, in the following, we first prove that is feasible when we assume that . the feasibility of also implies the feasibility of the rest of the power allocation. with the assumption that , the final transmission time for the first user is . based on ([eqn:b1]) and ([eqn:b2]), we know that . since is feasible up to , is feasible when we assume that . the feasibility of the rest of the power allocations follows in a similar way. this completes the feasibility part of the proof. we consider a band-limited awgn broadcast channel, with bandwidth mhz and noise power spectral density w/hz. we assume that the path loss between the transmitter and the first receiver is about db, and the path loss between the transmitter and the second receiver is about db. then, we have . for the energy harvesting process, we assume that at times ] mj . we find the maximum departure region for s, and plot it in fig. [fig:bc_region]. we observe that the maximum departure region is convex for each value of , and as increases, the maximum departure region monotonically expands. then, we aim to minimize the transmission completion time with mbits. following our algorithm, we obtain the optimal transmission policy, which is shown in fig.
[fig:bc_example1]. we note that the powers only potentially change at instances when energy arrives (lemma [lemma:const]); the power sequence is monotonically increasing and ``majorized'' over the whole transmission duration (lemma [lemma4]). we also note that, for this case, the first user transmits at a constant rate, and the rate for the second user monotonically increases. the transmitter finishes its transmissions to both users by time s, and the last energy harvest at time s is not used. next, we consider the example when mbits; we obtain the optimal transmission policy shown in fig. [fig:bc_example2]. in this example, the cut-off power is greater than , and therefore is allocated to the first user only over s; after s, the first user keeps transmitting at a constant rate until all bits are transmitted. in this case, the transmission rates for both users monotonically increase. the transmitter finishes its transmissions by time s, and the last energy harvest is not used. we investigated the transmission completion time minimization problem in an energy harvesting broadcast channel. we first analyzed the structural properties of the optimal transmission policy, and proved that the optimal total transmit power has the same structure as in the single-user channel. we also proved that there exists a _cut-off_ power for the stronger user. if the optimal total transmit power is lower than this cut-off level, all power is allocated to the stronger user, and when the optimal total transmit power is greater than this cut-off level, all power above this level is allocated to the weaker user. based on these structural properties of the optimal policy, we developed an iterative algorithm to obtain the globally optimal off-line transmission policy. a. el gamal, c. nair, b. prabhakar, e. uysal-biyikoglu, and s.
zahedi, ``energy-efficient scheduling of packet transmissions over wireless networks,'' _ieee infocom_, vol. 3, pp. 1773-1782, november 2002.

in this paper, we investigate the _transmission completion time_ minimization problem in a two-user additive white gaussian noise (awgn) broadcast channel, where the transmitter is able to harvest energy from nature, using a rechargeable battery. the harvested energy is modeled to arrive at the transmitter randomly during the course of transmissions. the transmitter has a fixed number of packets to be delivered to each receiver. our goal is to minimize the time by which all of the packets for both users are delivered to their respective destinations. to this end, we optimize the transmit powers and transmission rates intended for both users. we first analyze the structural properties of the optimal transmission policy. we prove that the optimal _total_ transmit power has the same structure as the optimal single-user transmit power. we also prove that there exists a _cut-off_ power level for the stronger user. if the optimal total transmit power is lower than this cut-off level, all transmit power is allocated to the stronger user, and when the optimal total transmit power is larger than this cut-off level, all transmit power above this level is allocated to the weaker user. based on these structural properties of the optimal policy, we propose an algorithm that yields the globally optimal off-line scheduling policy. our algorithm is based on the idea of reducing the two-user broadcast channel problem into a single-user problem as much as possible. energy harvesting, rechargeable wireless networks, broadcast channels, transmission completion time minimization, throughput maximization.
authors:: f. d. witherden, a. m. farrington, p. e. vincent
program title:: pyfr v0.1.0
licensing provisions:: new style bsd license
programming language:: python, cuda and c
computer:: variable, up to and including gpu clusters
operating system:: recent version of linux/unix
ram:: variable, from hundreds of megabytes to gigabytes
number of processors used:: variable, code is multi-gpu and multi-cpu aware through a combination of mpi and openmp
external routines/libraries:: python 2.7, numpy, pycuda, mpi4py, sympy, mako
nature of problem:: compressible euler and navier-stokes equations of fluid dynamics; potential for any advection-diffusion type problem.
solution method:: high-order flux reconstruction approach suitable for curved, mixed, unstructured grids.
unusual features:: code makes extensive use of symbolic manipulation and run-time code generation through a domain specific language.
running time:: many small problems can be solved on a recent workstation in minutes to hours.

throughout we adopt a convention in which dummy indices on the right hand side of an expression are summed, for example , where the limits are implied from the surrounding context. all indices are assumed to be zero-based.

kronecker delta + matrix determinant + matrix dimensions + + *indices.* + element type + element number + field variable number + summation indices + summation indices + + *domains.* + solution domain + all elements in of type + a _standard_ element of type + boundary of + element of type in + number of elements of type + + *expansions.
* + polynomial order + number of spatial dimensions + number of field variables + nodal basis polynomial for element type + physical coordinates + transformed coordinates + transformed to physical mapping + + *adornments and suffixes.* + a quantity in transformed space + a vector quantity of unit magnitude + transpose + a quantity at a solution point + a quantity at a flux point + a normal quantity at a flux point + + *operators.* + common solution at an interface + common normal flux at an interface + +

there is an increasing desire amongst industrial practitioners of computational fluid dynamics (cfd) to undertake high-fidelity scale-resolving simulations of transient compressible flows within the vicinity of complex geometries. for example, to improve the design of next generation unmanned aerial vehicles (uavs), there exists a need to perform simulations, at reynolds numbers and mach numbers , of: highly separated flow over deployed spoilers/air-brakes; separated flow within serpentine intake ducts; acoustic loading in weapons bays; and flow over entire uav configurations at off-design conditions. unfortunately, current generation industry-standard cfd software based on first- or second-order accurate reynolds averaged navier-stokes (rans) approaches is not well suited to performing such simulations. hence, there has been significant interest in the potential of high-order accurate methods for unstructured mixed grids, and whether they can offer an efficient route to performing scale-resolving simulations within the vicinity of complex geometries. popular examples of high-order schemes for unstructured mixed grids include the discontinuous galerkin (dg) method, first introduced by reed and hill, and the spectral difference (sd) methods, originally proposed under the moniker `staggered-grid chebyshev multidomain methods' by kopriva and kolias in 1996
and later popularised by sun et al. . in 2007 huynh proposed the flux reconstruction (fr) approach; a unifying framework for high-order schemes for unstructured grids that incorporates both the nodal dg schemes of and, at least for a linear flux function, any sd scheme. in addition to offering high-order accuracy on unstructured mixed grids, fr schemes are also compact in space, and thus when combined with explicit time marching offer a significant degree of element locality. as such, explicit high-order fr schemes are characterised by a large degree of structured computation. over the past two decades improvements in the arithmetic capabilities of processors have significantly outpaced advances in random access memory. algorithms which have traditionally been compute bound, such as dense matrix-vector products, are now limited instead by the bandwidth to/from memory. this is epitomised in . whereas the processors of two decades ago had flops-per-byte ratios of , more recent chips have ratios upwards of . this disparity is not limited to just conventional cpus. massively parallel accelerators and co-processors such as the nvidia k20x and intel xeon phi 5110p have ratios of and , respectively. [figure: trends in the peak floating point performance (double precision) and memory bandwidth of server-class intel processors from 1994 to 2013. the quotient of these two measures yields the flops-per-byte of a processor. data courtesy of jan treibig.] a concomitant of this disparity is that modern hardware architectures are highly dependent on a combination of high speed caches and/or shared memory to maintain throughput. however, for an algorithm to utilise these efficiently its memory access pattern must exhibit a degree of either spatial or temporal locality. to a first-order approximation the spatial locality of a method is inversely proportional to the amount of memory indirection.
on an unstructured grid, indirection arises whenever there is coupling between elements. this is potentially a problem for discretisations whose stencil is not compact. coupling also arises in the context of implicit time stepping schemes. implementations are therefore very often bound by memory bandwidth. as a secondary trend we note that the manner in which flops are realised has also changed. in the early 1990s commodity cpus were predominantly scalar with a single core of execution. however, in 2013 processors with eight or more cores are not uncommon. moreover, the cores on modern processors almost always contain vector processing units. vector lengths up to 256 bits, which permit up to four double precision values to be operated on at once, are not uncommon. it is therefore imperative that compute-bound algorithms are amenable to both multithreading and vectorisation. a versatile means of accomplishing this is by breaking the computation down into multiple, necessarily independent, streams. by virtue of their independence these streams can be readily divided up between cores and vector lanes. this leads directly to the concept of _stream processing_. we will refer to architectures amenable to this form of parallelisation as streaming architectures.
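the stream-processing idea above can be illustrated with a toy numpy sketch (our own example, not pyfr code): the same arithmetic is applied to many independent data streams, so the work can be written once over the whole array and divided between cores and vector lanes by the runtime, rather than visited one stream at a time.

```python
import numpy as np

# one row per independent 'stream' of work
u = np.linspace(0.0, 1.0, 8000).reshape(1000, 8)

def kernel_scalar(u):
    """scalar style: visit one stream at a time in a python loop."""
    out = np.empty_like(u)
    for i in range(u.shape[0]):
        out[i] = 0.5 * u[i] * u[i]
    return out

def kernel_stream(u):
    """stream style: identical arithmetic expressed over all streams
    at once, amenable to simd vectorisation and multithreading."""
    return 0.5 * u * u
```

both kernels compute the same result; the stream form simply exposes the independence of the rows to the underlying vectorised machinery.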
a corollary of the above discussion is that compute intensive discretisations which can be formulated within the stream processing paradigm are well suited to acceleration on current and likely future hardware platforms. the fr approach combined with explicit time stepping is an archetypal example of this. our objective in this paper is to present pyfr, an open-source python based framework for solving advection-diffusion type problems on streaming architectures using the fr approach. the framework is designed to solve a range of governing systems on mixed unstructured grids containing various element types. it is also designed to target a range of hardware platforms via use of an in-built domain specific language derived from the mako templating engine. the current release of pyfr is able to solve the compressible euler and navier-stokes equations on unstructured grids of quadrilateral and triangular elements in two dimensions, and unstructured grids of hexahedral elements in three dimensions, targeting clusters of cpus and nvidia gpus. the paper is structured as follows. in we provide an overview of the fr approach for advection-diffusion type problems on mixed unstructured grids. in we proceed to describe our implementation strategy, and in we present the euler and navier-stokes equations, which are solved by the current release of pyfr. the framework is then validated in , single-node performance is discussed in , and scalability of the code is demonstrated on up to 104 nvidia m2090 gpus in . finally, conclusions are drawn in . a brief overview of the fr approach for solving advection-diffusion type problems is given below. extended presentations can be found elsewhere. consider the following advection-diffusion problem inside an arbitrary domain in dimensions , where is the _field variable_ index, is a conserved quantity, is the flux of this conserved quantity, and .
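the displayed form of the governing system did not survive extraction; a hedged reconstruction consistent with the surrounding description (a field-variable index, a conserved quantity, and a flux depending on the solution and its gradient) is

```latex
\[
\frac{\partial u_\alpha}{\partial t}
  + \nabla \cdot \mathbf{f}_\alpha(u, \nabla u) = 0,
\]
```

together with its first order form

```latex
\[
\frac{\partial u_\alpha}{\partial t}
  + \nabla \cdot \mathbf{f}_\alpha(u, \mathbf{q}) = 0,
\qquad
\mathbf{q}_\alpha = \nabla u_\alpha,
\]
```

where $\alpha$ is the field variable index and $\mathbf{q}$ the auxiliary variable; the symbol choices here are our own assumptions rather than recovered notation.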
in defining the flux we have taken , in its unsubscripted form, to refer to all of the field variables, and to be an object of length consisting of the gradient of each field variable. we start by rewriting as a first order system , where is an auxiliary variable. here, as with , we have taken , in its unsubscripted form, to refer to the gradients of all of the field variables. take to be the set of available element types in . examples include quadrilaterals and triangles in two dimensions, and hexahedra, prisms, pyramids and tetrahedra in three dimensions. consider using these various element types to construct a conformal mesh of the domain such that , where refers to all of the elements of type inside of the domain, is the number of elements of this type in the decomposition, and is an index running over these elements with . inside each element we require that . it is convenient, for reasons of both mathematical simplicity and computational efficiency, to work in a transformed space. we accomplish this by introducing, for each element type, a standard element which exists in a transformed space, . next, assume the existence of a mapping function for each element such that , along with the relevant jacobian matrices . these definitions provide us with a means of transforming quantities to and from standard element space. taking the transformed solution, flux, and gradients inside each element to be , and letting , it can be readily verified that , as required. we note here the decision to multiply the first equation through by a factor of . doing so has the effect of taking , which allows us to work in terms of the physical solution. this is more convenient from a computational standpoint. we next proceed to associate a set of solution points with each standard element. for each type take to be the chosen set of points, where . these points can then be used to construct a nodal basis set with the property that .
to obtain such a set we first take to be any basis which spans a selected order polynomial space defined inside . next we compute the elements of the generalised vandermonde matrix . with these a nodal basis set can be constructed as . along with the solution points inside of each element we also define a set of flux points on . we denote the flux points for a particular element type as where . let the set of corresponding normalised outward - pointing normal vectors be given by . it is critical that each flux point pair along an interface share the same coordinates in physical space . for a pair of flux points and at a non - periodic interface this can be formalised as . a pictorial illustration of this can be seen in . solution points ( blue circles ) and flux points ( orange squares ) for a triangle and quadrangle in physical space . for the top edge of the quadrangle the normal vectors have been plotted . observe how the flux points at the interface between the two elements are co - located . ] the first step in the fr approach is to go from the discontinuous solution at the solution points to the discontinuous solution at the flux points where is an approximate solution of field variable inside of the element of type at solution point . this can then be used to compute a _ common solution _ where is a scalar function that , given two values at a point , returns a common value . here we have taken to be the element type , flux point number and element number of the adjoining point at the interface . since grids in fr are permitted to be unstructured the relationship between and is indirect . this necessitates the use of a lookup table .
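the nodal basis construction described above ( evaluate a modal basis at the solution points , invert the generalised vandermonde matrix , and recover lagrange - type functions ) can be sketched in one dimension . the legendre modal basis , the gauss points and the numpy routines used below are illustrative assumptions , not pyfr's actual implementation :

```python
import numpy as np

def nodal_basis(points, degree):
    """Nodal (Lagrange-type) basis built from a Legendre modal basis via
    the generalised Vandermonde matrix V[i, j] = P_j(points[i])."""
    V = np.polynomial.legendre.legvander(points, degree)
    Vinv = np.linalg.inv(V)

    def evaluate(x):
        # Row r of the result holds [l_0(x_r), ..., l_degree(x_r)]
        return np.polynomial.legendre.legvander(np.atleast_1d(x), degree) @ Vinv

    return evaluate

# The defining property l_i(x_j) = delta_ij at the solution points
pts = np.array([-np.sqrt(3/5), 0.0, np.sqrt(3/5)])  # degree-2 Gauss points
ell = nodal_basis(pts, 2)
assert np.allclose(ell(pts), np.eye(3))
```

the same construction carries over to any standard element once a modal basis spanning the chosen polynomial space is available .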
as the common solution function is permitted to perform upwinding or downwinding of the solution it is in general the case that . hence , it is important that each flux point pair only be visited _ once _ with the same common solution value assigned to both and . further , associated with each flux point is a vector correction function constrained such that with a divergence that sits in the same polynomial space as the solution . using these fields we can express the solution to as {\tilde{{\mathbf{x } } }= \tilde{{\mathbf{x}}}^{(u)}_{e\sigma}},\ ] ] where the term inside the curly brackets is the ` jump ' at the interface and the final term is an order approximation of the gradient obtained by differentiating the discontinuous solution polynomial . following the approaches of kopriva and sun et al . we can now compute physical gradients as where . having solved the auxiliary equation we are now able to evaluate the transformed flux where . this can be seen to be a collocation projection of the flux . with this it is possible to compute the normal transformed flux at each of the flux points . considering the physical normals at the flux points we see that which is the outward facing normal vector in physical space where is defined as the magnitude . as the interfaces between two elements conform we must have .
with these definitions we are now in a position to specify an expression for the _ common normal flux _ at a flux point pair as . the relationship arises from the desire for the resulting numerical scheme to be conservative ; a net outward flux from one element must be balanced by a corresponding inward flux on the adjoining element . it follows that . the common normal fluxes in can now be taken into transformed space via where . it is now possible to compute an approximation for the divergence of the _ continuous _ flux . the procedure is directly analogous to the one used to calculate the transformed gradient in {\tilde{{\mathbf{x } } } = \tilde{{\mathbf{x}}}^{(u)}_{e\rho}},\ ] ] which can then be used to obtain a semi - discretised form of the governing system where . this semi - discretised form is simply a system of ordinary differential equations in and can be solved using one of a number of schemes , e.g. a classical fourth order runge - kutta ( rk4 ) scheme . pyfr is a python based implementation of the fr approach described in section . it is designed to be compact , efficient , and platform portable . key functionality is summarised in table [ tab : pyfr - func ] : key functionality of pyfr .
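the semi - discretised system mentioned above is advanced in time with an explicit scheme such as rk4 . as a hedged sketch ( the scalar test problem and step size are illustrative , not part of pyfr ) , one classical rk4 step can be written as :

```python
import numpy as np

def rk4_step(f, u, t, dt):
    """One classical fourth-order Runge-Kutta step for du/dt = f(t, u)."""
    k1 = f(t, u)
    k2 = f(t + 0.5*dt, u + 0.5*dt*k1)
    k3 = f(t + 0.5*dt, u + 0.5*dt*k2)
    k4 = f(t + dt, u + dt*k3)
    return u + (dt/6.0)*(k1 + 2.0*k2 + 2.0*k3 + k4)

# Smoke test on du/dt = -u, whose exact solution at t = 1 is exp(-1)
u, t, dt = 1.0, 0.0, 0.01
for _ in range(100):
    u = rk4_step(lambda t, u: -u, u, t, dt)
    t += dt
assert abs(u - np.exp(-1.0)) < 1e-8
```

in practice the state is a large array of solution - point values and the same stepper applies unchanged .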
in this paper we have described pyfr , an open source python based framework for solving advection - diffusion type problems on streaming architectures . the structure and ethos of pyfr have been explained including our methodology for targeting multiple hardware platforms . we have shown that pyfr exhibits spatial super accuracy when solving the 2d euler equations and the expected order of accuracy when solving the couette flow problem on a range of grids in 2d and 3d . qualitative results for unsteady 3d viscous flow problems on curved grids have also been presented . performance of pyfr has been validated on an nvidia m2090 gpu in three dimensions . it has been shown that the compute bound kernels are able to obtain between and of realisable peak flop / s whereas the bandwidth bound point - wise kernels are , across the board , able to obtain in excess of realisable peak bandwidth . the scalability of pyfr has been demonstrated in the strong sense up to 32 nvidia m2090s and in the weak sense up to 104 nvidia m2090s when solving the 3d navier - stokes equations . the authors would like to thank the engineering and physical sciences research council for their support via two doctoral training grants and an early career fellowship ( ep / k027379/1 ) . the authors would also like to thank the e - infrastructure south centre for innovation for granting access to the emerald supercomputer , and nvidia for donation of three k20c gpus . it is possible to cast the majority of operations in an fr step as matrix - matrix multiplications of the form where are constants , is a constant operator matrix , and and are state matrices . to accomplish this we start by introducing the following constant operator matrix and the following state matrices . in specifying the state matrices there is a degree of freedom associated with how the field variables for each element are packed along a row of the matrix , with the possible packing choices being discussed in .
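most operations in an fr step therefore take the form just described , a constant operator matrix applied to a state matrix with scaling constants . a minimal numpy sketch , with purely illustrative dimensions and random data :

```python
import numpy as np

def fr_mul(alpha, A, B, beta, C):
    """The C <- alpha*A@B + beta*C form taken by most FR-step operations:
    a constant operator matrix A applied to a state matrix B."""
    return alpha*(A @ B) + beta*C

rng = np.random.default_rng(0)
n_upts, n_fpts = 6, 9          # solution/flux points per element (illustrative)
n_cols = 5 * 100               # field variables packed along rows, all elements
M0 = rng.random((n_fpts, n_upts))   # e.g. interpolation onto the flux points
U = rng.random((n_upts, n_cols))    # state: solution at the solution points
Uf = fr_mul(1.0, M0, U, 0.0, np.zeros((n_fpts, n_cols)))
assert Uf.shape == (n_fpts, n_cols)
```

because the operator matrix is shared by all elements of a type , the per - element products collapse into one large gemm , which is exactly the operation vendor blas libraries optimise for .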
using these matrices we are able to reformulate as in order to apply a similar procedure to we let {\tilde{{\mathbf{x } } } = \tilde{{\mathbf{x}}}^{(u)}_{e\sigma } } , & \dim { \bm{\mathsf{m}}}^{4}_{e } , & = n_{d}n_{e}^{(u ) } \times n_{e}^{(u)},\\ \big({\bm{\mathsf{m}}}^{6}_{e}\big)_{\rho\sigma } & = \big[\hat{\tilde{{\mathbf{n}}}}^{(f)}_{e\rho}\cdot \tilde{{\bm{\nabla } } } \cdot { \mathbf{g}}^{(f)}_{e\rho}(\tilde{{\mathbf{x}}})\big]_{\tilde{{\mathbf{x } } } = \tilde{{\mathbf{x}}}^{(f)}_{e\sigma } } , & \dim { \bm{\mathsf{m}}}^{6}_{e } , & = n_{d}n_{e}^{(u ) } \times n_{e}^{f},\\ \big({\bm{\mathsf{c}}}^{(f)}_{e}\big)_{\rho(n\alpha ) } & = \mathfrak{c}_{\alpha}u^{(f)}_{e\rho n\alpha } , & \dim { \bm{\mathsf{c}}}^{(f)}_{e } & = n^{(f)}_{e } \times n_v{\left\lvert{\mathbf{\omega}}_e\right\rvert},\\ \big(\tilde{{\bm{\mathsf{q}}}}^{(u)}_{e}\big)_{\sigma(n\alpha ) } & = \tilde{{\mathbf{q}}}^{(u)}_{e\sigma n\alpha } , & \dim \tilde{{\bm{\mathsf{q}}}}^{(u)}_{e } & = n_dn^{(u)}_{e } \times n_v{\left\lvert{\mathbf{\omega}}_e\right\rvert},\end{aligned}\ ] ] here it is important to qualify assignments of the form where is a component vector . as above there is a degree of freedom associated with the packing . with the benefit of foresight we take the stride between subsequent elements of in a matrix column to be either or depending on the context .
with these matrices , reduces to applying the procedure to we take hence where we note the block diagonal structure of . this is a direct consequence of the above choices for . finally , to rewrite we write ^t_{\tilde{{\mathbf{x } } } = \tilde{{\mathbf{x}}}^{(u)}_{e\sigma } } , & \dim { \bm{\mathsf{m}}}^{1}_{e } & = n^{(u)}_{e } \times n_dn^{(u)}_{e},\\ \big({\bm{\mathsf{m}}}^{2}_{e}\big)_{\rho\sigma } & = \big[\ell^{(u)}_{e\rho}(\tilde{{\mathbf{x}}}^{(f)}_{e\sigma } ) \hat{\tilde{{\mathbf{n}}}}^{(f)}_{e\sigma}\big]^t , & \dim { \bm{\mathsf{m}}}^{2}_{e } & = n^{(f)}_{e } \times n_dn^{(u)}_{e},\\ \big({\bm{\mathsf{m}}}^{3}_{e}\big)_{\rho\sigma } & = \big[\tilde{{\bm{\nabla } } } \cdot { \mathbf{g}}^{(f)}_{e\sigma}(\tilde{{\mathbf{x}}})\big]_{\tilde{{\mathbf{x } } } = \tilde{{\mathbf{x}}}^{(u)}_{e\rho } } , & \dim { \bm{\mathsf{m}}}^{3}_{e } & = n^{(u)}_{e } \times n^{(f)}_{e},\\ \big(\tilde{{\bm{\mathsf{d}}}}^{(f)}_{e}\big)_{\sigma(n\alpha ) } & = \mathfrak{f}^{\vphantom{(f_\perp)}}_{\alpha}\tilde{f}^{(f_\perp)}_{e\sigma n\alpha } , & \dim \tilde{{\bm{\mathsf{d}}}}^{(f)}_{e } & = n^{(f)}_{e } \times n_v{\left\lvert{\mathbf{\omega}}_e\right\rvert},\\ \big(\tilde{{\bm{\mathsf{f}}}}^{(u)}_{e}\big)_{\rho(n\alpha ) } & = \tilde{{\mathbf{f}}}^{(u)}_{e\rho n\alpha } , & \dim \tilde{{\bm{\mathsf{f}}}}^{(u)}_{e } & = n_dn^{(u)}_{e } \times n_v{\left\lvert{\mathbf{\omega}}_e\right\rvert},\\ \big(\tilde{{\bm{\mathsf{r}}}}^{(u)}_{e}\big)_{\rho(n\alpha ) } & = ( \tilde{{\bm{\nabla } } } \cdot \tilde{{\mathbf{f}}})^{(u)}_{e\rho n\alpha } , & \dim \tilde{{\bm{\mathsf{r}}}}^{(u)}_{e } & = n_e^{(u ) } \times n_v{\left\lvert{\mathbf{\omega}}_e\right\rvert},\end{aligned}\ ] ] and after substitution of for obtain . in the following section we take and to be the two discontinuous solution states at an interface and to be the normal vector associated with the first state .
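the two discontinuous interface states just introduced feed a numerical flux function . as a hedged sketch , a rusanov type flux for the scalar burgers equation is shown below ; the actual solver operates on the euler and navier - stokes systems , so the flux and wave - speed estimate here are illustrative assumptions :

```python
def rusanov_flux(uL, uR, f, max_wavespeed):
    """Rusanov / local Lax-Friedrichs numerical flux:
    F* = 0.5*(f(uL) + f(uR)) - 0.5*s*(uR - uL), s an upper wave-speed bound."""
    s = max_wavespeed(uL, uR)
    return 0.5*(f(uL) + f(uR)) - 0.5*s*(uR - uL)

# Scalar Burgers flux f(u) = u**2/2 with wave speed |f'(u)| = |u|
f = lambda u: 0.5*u*u
s = lambda a, b: max(abs(a), abs(b))
assert rusanov_flux(1.0, 1.0, f, s) == 0.5    # consistency: F*(u, u) = f(u)
assert rusanov_flux(1.0, -1.0, f, s) == 1.5   # dissipation term active at a jump
```

the same interface value is assigned to both sides of the flux point pair , which is what makes the resulting scheme conservative .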
for convenience we take , and with inviscid fluxes being prescribed by . also known as the local lax - friedrichs method , a rusanov type riemann solver imposes inviscid numerical interface fluxes according to where is an estimate of the maximum wave speed . to incorporate boundary conditions into the fr approach we introduce a set of boundary interface types . at a boundary interface there is only a single flux point : that which belongs to the element whose edge / face is on the boundary . associated with each boundary type are a pair of functions and where , , and are the solution , solution gradient and unit normals at the relevant flux point . these functions prescribe the common solutions and normal fluxes , respectively . instead of directly imposing solutions and normal fluxes it is oftentimes more convenient for a boundary to instead provide ghost states . in its simplest formulation and where is the ghost solution state and is the ghost solution gradient . it is straightforward to extend this prescription to allow for the provisioning of different ghost solution states for and and to permit to be a function of in addition to . patrice castonguay , pe vincent , and antony jameson . application of high - order energy stable flux reconstruction schemes to the euler equations . in _ 49th aiaa aerospace sciences meeting _ , volume 686 , 2011 . | high - order numerical methods for unstructured grids combine the superior accuracy of high - order spectral or finite difference methods with the geometric flexibility of low - order finite volume or finite element schemes . the flux reconstruction ( fr ) approach unifies various high - order schemes for unstructured grids within a single framework . additionally , the fr approach exhibits a significant degree of element locality , and is thus able to run efficiently on modern streaming architectures , such as graphical processing units ( gpus ) .
the aforementioned properties of fr mean it offers a promising route to performing affordable , and hence industrially relevant , scale - resolving simulations of hitherto intractable unsteady flows within the vicinity of real - world engineering geometries . in this paper we present pyfr , an open - source python based framework for solving advection - diffusion type problems on streaming architectures using the fr approach . the framework is designed to solve a range of governing systems on mixed unstructured grids containing various element types . it is also designed to target a range of hardware platforms via use of an in - built domain specific language based on the mako templating engine . the current release of pyfr is able to solve the compressible euler and navier - stokes equations on grids of quadrilateral and triangular elements in two dimensions , and hexahedral elements in three dimensions , targeting clusters of cpus , and nvidia gpus . results are presented for various benchmark flow problems , single - node performance is discussed , and scalability of the code is demonstrated on up to 104 nvidia m2090 gpus . the software is freely available under a 3-clause new style bsd license ( see www.pyfr.org ) . _ keywords : _ high - order ; flux reconstruction ; parallel algorithms ; heterogeneous computing |
uncertainty quantification of complex systems mandates stochastic computations of a multivariate output function that depends on , a high - dimensional random input with a positive integer . for practical applications , encountering hundreds of variables or more is not uncommon , where a function of interest , defined algorithmically via numerical solution of algebraic , differential , or integral equations , is all too often expensive to evaluate . therefore , there is a need to develop low - dimensional approximations of by seeking to exploit the hidden structure potentially lurking underneath a function decomposition . the dimensional decomposition of can be viewed as a finite , hierarchical expansion in terms of its input variables with increasing dimensions , where is a subset with the complementary set and cardinality , and is a -variate component function describing a constant or the cooperative influence of , , a subvector of , on when or . the summation in ( [ 1 ] ) comprises terms , with each term depending on a group of variables indexed by a particular subset of , including the empty set . this decomposition , first presented by hoeffding in relation to his seminal work on -statistics , has been studied by many other researchers : sobol used it for quadrature and analysis of variance ( anova ) ; efron and stein applied it to prove their famous lemma on jackknife variances ; owen presented a continuous space version of the nested anova ; hickernell developed a reproducing kernel hilbert space version ; and rabitz and alis made further refinements , referring to it as high - dimensional model representation ( hdmr ) . more recently , the author's group formulated this decomposition from the perspective of taylor series expansion , solving a number of stochastic - mechanics problems .
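for independent inputs , the leading terms of this decomposition can be estimated by monte carlo : the constant term is the mean of the output , and each univariate component is a conditional mean minus that constant . a minimal sketch , in which the test function , dimension and sample size are illustrative assumptions :

```python
import numpy as np

rng = np.random.default_rng(0)

def anova_components(y, i, xi, dim, n_mc=50_000):
    """Zero- and first-order ANOVA components for iid U(0,1) inputs:
    y_0 = E[y(X)],  y_i(x_i) = E[y(X) | X_i = x_i] - y_0."""
    X = rng.random((n_mc, dim))
    y0 = y(X).mean()
    Xc = X.copy()
    Xc[:, i] = xi                 # condition on the i-th input
    return y0, y(Xc).mean() - y0

y = lambda X: X[:, 0] + 2.0*X[:, 1]*X[:, 2]   # illustrative test function
y0, y1 = anova_components(y, 0, 0.9, dim=3)
# Exact values here: y_0 = 0.5 + 2*0.25 = 1.0 and y_1(0.9) = 0.9 - 0.5 = 0.4
assert abs(y0 - 1.0) < 0.02 and abs(y1 - 0.4) < 0.02
```

higher - variate components follow the same pattern but require nested conditional means , which is precisely the high - dimensional integration burden discussed below .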
in a practical setting , the multivariate function , fortunately , has an effective dimension much lower than , meaning that can be effectively approximated by a sum of lower - dimensional component functions , . given an integer , the truncated dimensional decomposition then represents a general -variate approximation of , which for includes cooperative effects of at most input variables , , on . however , for ( [ 2 ] ) to be useful , one must ask the fundamental question : what is the approximation error committed by for a given ? the answer to this question , however , is neither simple nor unique , because there are multiple ways to construct the component functions , , spawning approximations of distinct qualities . indeed , there exist two important variants of the decomposition : ( 1 ) referential dimensional decomposition ( rdd ) and ( 2 ) anova dimensional decomposition ( add ) , both representing sums of lower - dimensional component functions of . while add has desirable orthogonal properties , the anova component functions are difficult to compute due to the high - dimensional integrals involved . in contrast , the rdd lacks orthogonal features with respect to the probability measure of , but its component functions are much easier to obtain .
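the computational appeal of truncation is easy to quantify : the full decomposition contains one component function per subset of the input variables , so the term count grows exponentially with dimension , while a low - variate truncation retains far fewer . a small sketch ( the function name is mine ) :

```python
from math import comb

def n_terms(N, S):
    """Component functions retained by an S-variate truncation of the
    N-dimensional decomposition (the full expansion has 2**N terms)."""
    return sum(comb(N, s) for s in range(S + 1))

assert n_terms(20, 20) == 2**20        # untruncated: 1,048,576 terms
assert n_terms(20, 1) == 21            # univariate truncation
assert n_terms(20, 2) == 211           # bivariate truncation
```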
for rdd , an additional question arises regarding the reference point , which , if improperly selected , can mar the approximation . existing error analysis , limited to the univariate truncation of , reveals that the expected error from the rdd approximation is at least four times larger than the error from the add approximation . although useful to some extent , such a result alone is not adequate when evaluating multivariate functions requiring higher - variate interactions of input . no error estimates exist yet in the current literature even for a bivariate approximation . therefore , a more general error analysis pertaining to a general -variate approximation of a multivariate function should provide much - needed insights into the mathematical underpinnings of dimensional decomposition . the purpose of this paper is twofold . firstly , a brief exposition of add and rdd is given in section 2 , including clarifications of parallel developments and synonyms used by various researchers . the error analysis pertaining to the add approximation is described in section 3 . secondly , a direct form of the rdd approximation , previously developed by the author's group , is tapped to provide a vital link to subsequent error analysis . section 4 introduces new formulae for the lower and upper bounds of the expected errors from the bivariate and general -variate rdd approximations . these error bounds , so far available only for the univariate approximation , are used to clarify why add approximations are exceedingly more precise than rdd approximations . there are seven new results stated or proved in this paper : proposition [ p4 ] , theorems [ t6 ] , [ t7 ] , and corollaries [ c2 ] , [ c3 ] , [ c4a ] , [ c4 ] .
proofs of other results can be obtained from the references cited , including a longer version of this paper ( http://www.engineering.uiowa.edu/~rahman/moc_longpaper.pdf ) .conclusions are drawn in section 5 .let , , , , and represent the sets of positive integer ( natural ) , non - negative integer , integer , real , and non - negative real numbers , respectively . for , denote by the -dimensional euclidean space and by the -dimensional multi - index space .these standard notations will be used throughout the paper .let be a complete probability space , where is a sample space , is a -field on , and ] of the -variate add approximation matches the exact mean :=\int_{\mathbb{r}^{n}}y(\mathbf{x})f_{\mathbf{x}}(\mathbf{x})d\mathbf{x}=y_{\emptyset , a} ] represents the variance of the _ _zero-__mean add component function , .clearly , the approximate variance in ( [ 12 ] ) approaches the exact variance =\sum_{\emptyset\ne u\subseteq\{1,\cdots , n\}}\sigma_{u}^{2}=\sum_{s=1}^{n}\:\sum_{{\textstyle { \emptyset\ne u\subseteq\{1,\cdots , n\}\atop |u|=s}}}\sigma_{u}^{2},\label{13}\ ] ] the sum of all variance terms , when .a normalized version is often called the global sensitivity index of for .define a mean - squared error :=\int_{\mathbb{r}^{n}}\left[y(\mathbf{x})-\hat{y}_{s , a}(\mathbf{x})\right]^{2}f_{\mathbf{x}}(\mathbf{x})d\mathbf{x}\label{14}\ ] ] committed by the -variate add approximation of . replacing and in ( [ 14 ] ) with the right sides of ( [ 4a ] ) and ( [ 11 ] ) , respectively , and then recognizing propositions [ p1 ] and [ p2 ] yields which completely eliminates the variance terms of that are associated with - and all lower - variate contributions , an attractive property of add . by setting ,the error can be expressed for any truncation of add . among all possible measures, the probability measure endows the add approximation with an error - minimizing property , explained as follows . 
for a given , the -variate add approximation is optimal in the mean - square sense .[ p4 ] consider a generic -variate approximation of other than the add approximation . since contains only higher than -variate terms and contains at most -variate terms , and are orthogonal , satisfying =0 ] and then calculating the error on average .but , defined here may follow an arbitrary probability law with density ; therefore , selecting the reference point characterized by the probability density is more appropriate , which leads to & : = \int_{\mathbb{r}^{n}}e_{s , r}(\mathbf{c})f_{\mathbf{x}}(\mathbf{c})d\mathbf{c } \\ & = \int_{\mathbb{r}^{2n}}\left[y(\mathbf{x})-\hat{y}_{s , r}(\mathbf{x};\mathbf{c})\right]^{2}f_{\mathbf{x}}(\mathbf{x})f_{\mathbf{x}}(\mathbf{c})d\mathbf{x}d\mathbf{c } , \end{split}\label{29}\ ] ] as the expected value of the rdd error . simplifying ( [ 29 ] ) in terms of the variance components of , as done for the add error in ( [ 15 ] ) , for arbitrary and may appear formidable . here , the _ zero_-variate ( ) , univariate ( ) , and bivariate ( ) approximation errors for arbitrary will be derived first , followed by error analysis for a general -variate approximation . 
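the univariate rdd approximation itself is inexpensive to form , since it only requires evaluations of the function along coordinate lines through the reference point . a hedged sketch of the standard univariate cut formula , which is exact for additive functions ; the test function and points below are illustrative :

```python
import numpy as np

def rdd_univariate(y, x, c):
    """Univariate referential (cut) approximation anchored at c:
    yhat(x) = sum_i y(x_i, c_{-i}) - (n - 1) * y(c)."""
    n = len(x)
    total = -(n - 1) * y(c)
    for i in range(n):
        z = np.array(c, dtype=float)
        z[i] = x[i]               # replace only the i-th coordinate
        total += y(z)
    return total

# Exact for additive functions, whatever the reference point
y = lambda v: v[0] + 3.0*v[1] - v[2]
x = np.array([0.2, -0.4, 1.3])
c = np.array([0.7, 0.1, -0.5])
assert np.isclose(rdd_univariate(y, x, c), y(x))
```

for non - additive functions the residual depends on the reference point , which is exactly what motivates averaging over random reference points in the analysis that follows .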
in all cases , the derivations require using ( 1 ) the relationships , [ 30 ] that exist between add and rdd component functions and approximations and ( 2 ) sobol's formula , for select choices of described in the following subsection . equations ( [ 30a ] ) and ( [ 30b ] ) follow from propositions [ p1 ] , [ p2 ] , and [ p3 ] and definitions of the respective component functions in ( [ 4 ] ) and ( [ 8 ] ) , eventually leading to ( [ 30c ] ) . the term in sobol's formula represents a sum of variance terms contributed by the add component functions that belong to . theorems [ t4 ] , [ t5 ] , and [ t6 ] show how the expected errors from the _ zero_-variate , univariate , and bivariate rdd approximations , respectively , depend on the variance components of . let be a random vector with the joint probability density function of the form , where is the marginal probability density function of its coordinate . then the expected error committed by the zero - variate rdd approximation for is =2\sigma^{2},\label{31b}\ ] ] where =\mathbb{e}\left[y^2(\mathbf{x})\right]-y_{\emptyset , a}^2 ] is the variance of the _zero_-mean add component function , . [ t5 ] setting in ( [ 29 ] ) , the expected error from the univariate rdd approximation on expansion is a sum =i_{1,1}+i_{1,2}+i_{1,3}\label{33}\ ] ] of three integrals on , where their first indices represent the univariate approximation . the first integral =y_{\emptyset , a}^{2}+\sigma^{2}=y_{\emptyset , a}^{2}+\sum_{s=1}^{n}\:\sum_{{\textstyle { \emptyset\ne u\subseteq\{1,\cdots , n\}\atop |u|=s}}}\sigma_{u}^{2},\label{35}\ ] ] expressed in terms of the variance components , is independent of .
however, the second integral depends on , yielding \\ & = -2\left(y_{\emptyset , a}^{2}+\hat{\sigma}_{1,a}^{2}\right ) \\ & = -2y_{\emptyset , a}^{2}-2{\displaystyle \sum_{{\textstyle { \emptyset\ne u\subseteq\{1,\cdots , n\}\atop |u|=1}}}}\sigma_{u}^{2 } , \end{split}\label{36}\ ] ] where the first , second , and fifth lines are obtained ( 1 ) employing the add - rdd relationships in ( [ 30c ] ) for , ( 2 ) recognizing and to be orthogonal , satisfying =0 ] is the variance of the _zero_-mean add component function , .[ t6 ] setting in ( [ 29 ] ) , the expected error from the bivariate rdd approximation on expansion is another sum =i_{2,1}+i_{2,2}+i_{2,3}\label{40}\ ] ] of three -dimensional integrals where their first indices represent the bivariate approximation . since the first integraldoes not depend on , is the same as .following a similar reasoning employed for the univariate approximation , the second integral = -2\left(y_{\emptyset , a}^{2}+\hat{\sigma}_{2,a}^{2}\right ) \\ % & = -2\left(y_{\emptyset , a}^{2}+\hat{\sigma}_{2,a}^{2}\right)\\ & = -2y_{\emptyset , a}^{2}-2{\displaystyle \sum_{{\textstyle { \emptyset\ne u\subseteq\{1,\cdots , n\}\atop |u|=1}}}}\sigma_{u}^{2}-2{\displaystyle \sum_{{\textstyle { \emptyset\ne u\subseteq\{1,\cdots , n\}\atop |u|=2}}}}\sigma_{u}^{2 } \end{split}\label{43}\ ] ] contains variance terms associated with at most two variables . 
using the expression of from ( [ 27 ] ) ,the expanded third integral becomes ^{2}f_{\mathbf{x}}(\mathbf{x})f_{\mathbf{x}}(\mathbf{c})d\mathbf{x}d\mathbf{c}\\ & = \int_{\mathbb{r}^{2n}}\biggl\{\biggl[{\displaystyle \sum_{i=1}^{n-1}\sum_{j = i+1}^{n}}y(x_{i},x_{j},\mathbf{c}_{-\{i , j\}})\biggr]^{2}{\displaystyle + ( n-2)^{2}{\displaystyle \biggl[\sum_{i=1}^{n}y(x_{i},\mathbf{c}_{-\{i\}})\biggr]^{2}}}\\ & \;\;\ ; + { \displaystyle \frac{1}{4}(n-1)^{2}(n-2)^{2}}y^{2}(\mathbf{c})\\ & \;\;\ ; -2(n-2)\biggl[{\displaystyle \sum_{i=1}^{n-1}\sum_{j = i+1}^{n}}y(x_{i},x_{j},\mathbf{c}_{-\{i , j\}})\biggr]\biggl[{\displaystyle \sum_{i=1}^{n}}y(x_{i},\mathbf{c}_{-\{i\}})\biggr]\\ & \;\;\ ; -(n-1)(n-2)^{2}\biggl[{\displaystyle \sum_{i=1}^{n}}y(x_{i},\mathbf{c}_{-\{i\}})\biggr]y(\mathbf{c})\\ & \;\;\ ; + ( n-1)(n-2)\biggl[{\displaystyle \sum_{i=1}^{n-1}\sum_{j = i+1}^{n}}y(x_{i},x_{j},\mathbf{c}_{-\{i , j\}})\biggr]y(\mathbf{c})\biggr\ } f_{\mathbf{x}}(\mathbf{x})f_{\mathbf{x}}(\mathbf{c})d\mathbf{x}d\mathbf{c}. \end{split}\label{43b}\ ] ] employing sobol s formula , this time for , , , , , , and , where , , results in producing the generic -variate coefficient for the bivariate approximation . adding all terms in ( [ 42 ] ) , ( [ 43 ] ) , and ( [ 44 ] ) , with the understanding that , yields ( [ 39 ] ) , proving the theorem .the expected error ] and ] , the results of the _ zero_-variate and univariate rdd approximations presented in theorems [ t4 ] and [ t5 ] coincide with those derived by wang . 
however , the results of the bivariate rdd approximation , that is , theorem [ t6 ] , are new . theorems [ t5 ] and [ t6 ] demonstrate that on average the error from the univariate rdd approximation eliminates the variance terms associated with the univariate contribution . for the bivariate rdd approximation , the variance portions resulting from the univariate and bivariate terms have been removed as well . the univariate and bivariate add approximations also satisfy this important property . however , the coefficients of higher - variate terms in the rdd errors are larger than unity , implying greater errors from rdd approximations than from add approximations . from corollary [ c1 ] , the _zero_-variate rdd approximation on average commits twice as much error as the _ zero_-variate add approximation . since a _ zero_-variate approximation , whether derived from add or rdd , does not capture the random fluctuations of a stochastic response , the error analysis associated with a _zero_-variate approximation is useless . nonetheless , the _ zero_-variate results are reported here for completeness . corollary [ c2 ] shows that the expected error from the univariate rdd approximation is at least four times larger than the error from the univariate add approximation . in contrast , the expected error from the bivariate rdd approximation can be eight times larger or more than the error from the bivariate add approximation .
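the factor of four for univariate truncations can be checked numerically . for a purely bivariate function of independent inputs the univariate add error equals the total variance , while the univariate rdd error , averaged over random reference points , is four times that . the test function and sample counts below are illustrative assumptions :

```python
import numpy as np

rng = np.random.default_rng(1)

def y(X):
    # Purely bivariate function of three independent U(-1, 1) inputs
    return X[:, 0]*X[:, 1] + X[:, 1]*X[:, 2] + X[:, 0]*X[:, 2]

n = 200_000
X = rng.uniform(-1.0, 1.0, (n, 3))
C = rng.uniform(-1.0, 1.0, (n, 3))        # random reference points

err_add = y(X).var()                      # univariate ADD error = total variance

# Univariate RDD: yhat = sum_i y(x_i, c_{-i}) - 2*y(c)
yhat = -2.0*y(C)
for i in range(3):
    Z = C.copy()
    Z[:, i] = X[:, i]
    yhat += y(Z)
err_rdd = ((y(X) - yhat)**2).mean()       # expected RDD error over x and c

assert abs(err_rdd/err_add - 4.0) < 0.2   # theory: a ratio of exactly 4 here
```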
given a truncation , an add approximation is superior to an rdd approximation . in addition , rdd approximations may perpetrate very large errors at upper bounds when there are a large number of variables and appropriate conditions . for instance , consider a contrived example involving a function of variables with a finite variance and the following distribution of the variance terms : , , and . then , the errors from the univariate and bivariate add approximations are both equal to , which is negligibly small . in contrast , the error from the univariate rdd approximation reaches , an unacceptably large magnitude already . furthermore , the error from the bivariate rdd approximation jumps to an enormously large value of . more importantly , the results reveal a theoretical possibility for a higher - variate rdd approximation to commit a larger error than a lower - variate rdd approximation , an impossible scenario for the add approximation . however , it is unlikely for this odd behavior to be exhibited for realistic functions , where the variances of higher - variate component functions attenuate rapidly or vanish altogether . nonetheless , caution is warranted when employing rdd approximations for stochastic analysis of high - dimensional systems . [ r1 ] the error analysis presented so far is limited to at most the bivariate approximation .
in this subsection ,the approximation error from a general -variate truncation is derived as follows .let be a random vector with the joint probability density function of the form , where is the marginal probability density function of its coordinate .then the expected error committed by the -variate rdd approximation for is ={\displaystyle \sum_{s = s+1}^{n}\left[{\displaystyle 1+\sum_{k=0}^{s}}{\displaystyle \binom{s - s+k-1}{k}^{2}}\binom{s}{s - k}\right]\sum_{{\textstyle { \emptyset\neq u\subseteq\{1,\cdots , n\}\atop |u|=s}}}}\sigma_{u}^{2},\label{47}\ ] ] where ] from the -variate rdd approximation , expressed in terms of the error from the -variate add approximations , are \le\left[{\displaystyle 1+\sum_{k=0}^{s}}\binom{n - s+k-1}{k}^{2}\binom{n}{s - k}\right]e_{s , a},\label{57}\ ] ] , where the coefficients of the lower and upper bounds are obtained from and respectively .[ c3 ] both theorem [ t7 ] and corollary [ c3 ] are new and provide a general result pertaining to rdd error analysis for an arbitrary truncation .the specific results of the _ zero_-variate or univariate or bivariate rdd approximation , derived in the preceding subsection , can be recovered by setting or or in ( [ 47 ] ) through ( [ 59 ] ) . from corollary [ c3 ] ,the expected error from the -variate rdd approximation of a multivariate function is at least times larger than the error from the -variate add approximation .in other words , the ratio of rdd to add errors doubles for each increment of the truncation . consequently , add approximations are exceedingly more precise than rdd approximations at higher - variate truncations .although the relative disadvantage of using rdd over add worsens drastically with the truncation , one hopes that the approximation error is also decreasing with increasing .for instance , given a rate at which decreases with , what can be inferred on how fast and ] , according to ( [ 59d ] ) , does not follow suit for an arbitrary . 
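the bracketed coefficient in the error formula of the theorem can be evaluated directly with binomial coefficients . at the first retained interaction level it yields the lower - bound factor , which doubles with each increment of the truncation , matching the observation above . the function name is mine :

```python
from math import comb

def rdd_error_coeff(s, S):
    """Coefficient of the |u| = s variance terms in the expected error of
    the S-variate RDD approximation (the bracketed factor in the theorem)."""
    return 1 + sum(comb(s - S + k - 1, k)**2 * comb(s, S - k)
                   for k in range(S + 1))

# Lower bound at s = S + 1: the coefficient is 2**(S + 1), doubling with S
assert [rdd_error_coeff(S + 1, S) for S in range(5)] == [2, 4, 8, 16, 32]
```

the add error formula , by contrast , carries a unit coefficient on every retained variance term , which is the source of the growing gap between the two approximations .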
however , there exists a minimum threshold , say , , when crossed , ] , resulting in the relationship between and .equation ( [ 59e ] ) supports an exact solution of in terms of , expressed employing the lambert w function and can be inverted easily .for instance , when , ( [ 59f ] ) yields , the only real - valued solution of interest . depicted in figure [ f1 ] ( left ) , derived from ( [ 59f ] ) increases monotonically and strikingly close to linearly with for the ranges of the variables examined . using the equalities in ( [ 59c ] ) and ( [ 59d ] ) , figure [ f1 ] ( right ) presents plots of two normalized errors ,/\sigma^2 ] .when the rate parameter is sufficiently low ( _ e.g. _ , ) , the expected rdd error initially rises before falling as becomes larger .the non - monotonic behavior of the rdd error is undesirable , but it vanishes when the rate parameter is sufficiently high ( _ e.g. _ , ) .no such anomaly is found in the add error for any . and ( left ) and normalized rdd and add errors versus for , or ( right ) . ] the expected error ] .therefore , \to 0 $ ] as for .the error analysis presented in this paper pertains to only second - moment characteristics of .similar analyses or definitions aimed at higher - order moments or probability distribution of can be envisioned , but no closed - form solutions and simple expressions are possible . 
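the lambert w function used above inverts the relation w*exp(w) = z . as the symbols of the exact threshold formula are not recoverable from the text here , the sketch below just implements the principal branch by newton iteration :

```python
import math

def lambert_w(z, tol=1e-12):
    """Principal branch of the Lambert W function (w*exp(w) = z) by
    Newton iteration, valid for real z >= -1/e."""
    w = math.log1p(z) if z > -0.25 else -0.5   # rough starting guess
    for _ in range(100):
        ew = math.exp(w)
        step = (w*ew - z) / (ew*(w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

# W(1) is the omega constant, approximately 0.567143
assert abs(lambert_w(1.0) - 0.5671432904097838) < 1e-10
assert abs(lambert_w(2.0)*math.exp(lambert_w(2.0)) - 2.0) < 1e-10
```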
however , if satisfies the requirements of the chebyshev inequality or its descendants , a condition fulfilled by many realistic functions , then the results and findings from this work can be effectively exploited for stochastic analysis . see the longer version of the paper for further details . two variants of dimensional decomposition , namely , rdd and add , of a multivariate function , both representing finite sums of lower - dimensional component functions , were studied . the approximations resulting from the truncated rdd and add are explicated , including clarifications of parallel developments and synonyms used by various researchers . for the rdd approximation , a direct form , previously developed by the author's group , was found to provide a vital link to subsequent error analysis . new theorems were proven about the expected errors from the bivariate and general rdd approximations , so far available only for the univariate rdd approximation , when the reference point is selected randomly . they furnish new formulae for the lower and upper bounds of the expected error committed by an arbitrarily truncated rdd , providing a means to grade rdd against add approximations . the formulae indicate that the expected error from the -variate rdd approximation of a function of variables , where , is at least times larger than the error from the -variate add approximation . consequently , add approximations are exceedingly more precise than rdd approximations at higher - variate truncations . the analysis also finds the rdd approximation to be sub - optimal for an arbitrarily selected reference point , whereas the add approximation always results in minimum error . therefore , rdd approximations should be used with caution .
The main theme of this paper is error analysis for approximations derived from two variants of dimensional decomposition of a multivariate function: the referential dimensional decomposition (RDD) and the analysis-of-variance dimensional decomposition (ADD). New formulae are presented for the lower and upper bounds of the expected errors committed by bivariately and arbitrarily truncated RDD approximations when the reference point is selected randomly, thereby facilitating a means for weighing RDD against ADD approximations. The formulae reveal that the expected error from the -variate RDD approximation of a function of  variables, where , is at least  times greater than the error from the -variate ADD approximation. Consequently, ADD approximations are exceedingly more precise than RDD approximations. The analysis also finds the RDD approximation to be sub-optimal for an arbitrarily selected reference point, whereas the ADD approximation always results in minimum error. Therefore, the RDD approximation should be used with caution.
this paper presents a new approach to the constructive design of a robust nonlinear dynamic state feedback controller using an integral quadratic constraint ( iqc ) approach .the idea of using a copy of the plant nonlinearity in the controller is used previously in the literature . however in this paper , we apply a new methodology to construct a controller which uses linear state feedback guaranteed cost control and copies of the plant nonlinearities to form a dynamic state feedback robust nonlinear controller .this approach provides robust performance in the case where uncertainties and nonlinearities are present in the plant .consider a class of uncertain nonlinear systems described by the following state equations : where is the state , is the control input , , , , are the uncertainty outputs , , , , are the uncertainty inputs , are the nonlinearity outputs , are the nonlinearity inputs . also , the nonlinearity inputs and outputs are related as follows : where the nonlinear functions satisfy the following generalized monotonicity conditions : \left [ \begin{array}{c } \psi_i(\nu_1)-\psi_i(\nu_2)\\ \nu_1-\nu_2 \end{array}\right]\geq0\ ] ] for all , and .also , are given symmetric matrices representing the monotonicity or global lipschitz conditions ; see .furthermore , we assume that and the uncertainty in the system satisfies the following integral quadratic constraints , ( see ) : m_{1,j } \left [ \begin{array}{c } \xi_{1,j}(t)\\ \zeta_{1,j}(t ) \end{array}\right ] dt + x(0)^t s_{1,j } x(0)\geq 0,\ ] ] for all . 
here the are given symmetric matrices and the are given positive definite matrices .let us define ;\quad \tilde{u}(t){\stackrel{\triangle}{=}}\left [ \begin{array}{c } u(t ) \\\tilde{\nu}_1(t)\\ \vdots\\ \tilde{\nu}_g(t)\\ \bar{z_1}\\ \vdots\\ \bar{z_g } \end{array}\right].\ ] ] hence the class of controllers considered here are nonlinear dynamic state feedback controllers which contain a copy of the plant nonlinearities ( see fig .[ fig : nonlinear ] ) ; where and also is a controller gain matrix .the following iqcs , which follow from ( [ eqmon1 ] ) , are satisfied : n_i\left [ \begin{array}{c } { \mu}_i-\tilde{\mu}_i\\ \nu_i-\tilde{\nu}_i \end{array}\right]dt\nonumber \\ & + x(0)'\breve{s}_{i,1}x(0)\big\}\geq0;\\ \label{eqiqc3 } e&\big\{\int_0^t[\mu_i \quad \nu_i ] n_i\left [ \begin{array}{c } { \mu}_i\\ \nu_i \end{array}\right]dt+x(0)'\breve{s}_{i,2}x(0)\big\}\geq0;\\ \label{eqiqc4 } e&\big\{\int_0^t[\tilde{\mu}_i \quad \tilde{\nu}_i ] n_i\left [ \begin{array}{c } \tilde{\mu}_i\\ \tilde{\nu}_i \end{array}\right]dt+x(0)'\breve{s}_{i,3}x(0)\big\}\geq0;\end{aligned}\ ] ] for all . 
herethe , , are any positive definite matrices .now , we first move the controller nonlinearities ( [ eqnon2 ] ) into the plant description and introduce new notation as follows : ;\quad \tilde{b}_{2 } { \stackrel{\triangle}{=}}\left [ \begin{array}{ccc } \bar{b}_2 & 0 & 0\\ 0 & 0 & i \end{array}\right];\\ & \tilde{\xi}_{2,i}=\left [ \begin{array}{c } \mu_i \\ \tilde{\mu}_i \end{array}\right];\quad \tilde{\zeta}_{2,i}=\left [ \begin{array}{c } \nu_i \\ \tilde{\nu}_i \end{array}\right];\quad \tilde{c}_{1,j}=\left [ \begin{array}{ccc } \check{c}_{1,j } & 0 \end{array}\right];\\ & \tilde{b}_{1,j } { \stackrel{\triangle}{=}}\left [ \begin{array}{c } \check{b}_{1,j } \\ 0 \end{array}\right];\quad \tilde{c}_{2,i}=\left [ \begin{array}{ccc } \bar{c}_{1,i } & 0 \\ 0 & 0 \end{array}\right];\\ & \tilde{d}_{2,i}=\left [ \begin{array}{ccc } \bar{d}_{1,i } & 0 \\ 0 & 0 \end{array}\right];\quad \tilde{d}_{1,j}=\left [ \begin{array}{c } \check{d}_{1,j } \\ 0 \end{array}\right ] , \tilde{a}=\left [ \begin{array}{cc } a & 0 \\ 0 & 0 \end{array}\right ] , \end{split}\ ] ] for all and . 
using the above notation and ( [ eqnx ] ) , a new system can be written as follows : and .also , the iqcs ( [ eqiqc2])([eqiqc4 ] ) for the nonlinear uncertainty terms can be written as follows : \tilde{m}_{i , p } \left [ \begin{array}{c } \tilde{\xi}_{2,i}\\ \tilde{\zeta}_{2,i } \end{array}\right ] dt + x(0)^t \tilde{s}_{i , p } x(0)\geq 0,\ ] ] for , where and are positive definite matrices .we consider the following cost functional associated with the system ( [ eqsystemc1 ] ) : where and are positive - definite symmetric matrices .[ obs1 ] it is observed that the nonlinearities ( [ eqnon1 ] ) and ( [ eqnon2 ] ) satisfy the iqcs ( [ eqiqc2])([eqiqc4 ] ) .hence , it follows that if the linear uncertain system ( [ eqsystemc1 ] ) and ( [ eqiqc ] ) , with the linear controller ( [ eqnestimator ] ) , leads to an upper bound on the cost function ( [ eqnfcost ] ) then the same controller ( [ eqnestimator ] ) will yield the same upper bound for the uncertain system ( [ eqsystem ] ) , ( [ eqnon1 ] ) , ( [ eqiqc1 ] ) and ( [ eqnon2 ] ) .furthermore , it follows from the above discussion that the system ( [ eqsystem ] ) , ( [ eqnon1 ] ) , ( [ eqiqc1 ] ) with controller ( [ eqnestimator ] ) and ( [ eqnon2 ] ) will also lead to the same upper bound on the cost .we first write the iqc ( [ eqiqc ] ) in the following form which is parametrized by a set of multipliers for : \tilde{m}_{i}(\lambda_i ) \left [ \begin{array}{c } \tilde{\xi}_{2,i}\\ \tilde{\zeta}_{2,i } \end{array}\right ] dt + x(0)^t \tilde{s}_{i , p}(\lambda_i ) x(0)\geq 0,\ ] ] where , . furthermore , ] , ] . here are the negative eigenvalues and are the positive eigenvalues of the matrix . 
now a change in variables is introduced as follows : =t_i \left [ \begin{array}{c } \bar{\xi}_{2,i}(t)\\ \bar{\zeta}_{2,i}(t ) \end{array}\right];\ ] ] =t_i^{-1}\left [ \begin{array}{c } \tilde{\xi}_{2,i}(t)\\ \tilde{\zeta}_{2,i}(t ) \end{array}\right]=\left [ \begin{array}{cc } \tilde{t}_{11 } & \tilde{t}_{12}\\ \tilde{t}_{21 } & \tilde{t}_{22 } \end{array}\right ] \left [ \begin{array}{c } \tilde{\xi}_{2,i}(t)\\ \tilde{\zeta}_{2,i}(t ) \end{array}\right].\ ] ] the iqcs ( [ eqiqcm ] ) for a given can now be modified by incorporating new variables as given below ( from now on we remove the argument ( ) from the equations wherever possible for the sake of brevity ) : \left [ \begin{array}{cc } -\textbf{i } & 0\\ 0 & \textbf{i } \end{array}\right]\left [ \begin{array}{c } \bar{\xi}_{2,i}\\ \bar{\zeta}_{2,i } \end{array}\right]dt + x(0)'\tilde{s}_{2,i}(\lambda)x(0)\geq0 . \end{split}\ ] ] hence , we have and , which imply the following relation : hence , we obtain =\left [ \begin{array}{cc } \tilde{t}_{11}^{-1 } & -\tilde{t}_{11}^{-1}\tilde{t}_{12}\\ \tilde{t}_{21}\tilde{t}_{11}^{-1 } & \tilde{t}_{22}-\tilde{t}_{21}\tilde{t}_{11}^{-1}\tilde{t}_{12 } \end{array}\right ] \left [ \begin{array}{c } \bar{\xi}_{2,i}(t)\\ \tilde{\zeta}_{2,i}(t ) \end{array}\right].\ ] ] substituting for and into ( [ eqsystemc1 ] ) gives the following dynamical system : for all , and where also in order to deal with the terms in ( [ eqc1system ] ) we use standard loop shifting ideas where we require that the following condition is satisfied for all and : for this purpose , we first define the following quantities : by using the definition in ( [ eqphis ] ) , we define the transformed uncertainty inputs and outputs as follows : ;\\ \check{\zeta}_{2,i } & { \stackrel{\triangle}{=}}\bar{\phi}_i^{-1/2 } [ \bar{c}_{2,i } x + \bar{d}_{2,i}\tilde{u } ] . 
\end{split}\ ] ] hence , ] .the corresponding bound on the cost function is obtained as follows : suppose there exist constants and vectors and such that conditions ( [ eqcondq ] ) , ( [ eqcondd1 ] ) , along with assumption 1 are satisfied . then : 1 . if the controller defined by ( [ eqare ] ) , ( [ eqdefares ] ) , and ( [ eqcontrol2 ] ) is applied to the uncertain system defined by ( [ eqfiqc ] ) , ( [ eqc2system ] ) , ( [ eqfiqc ] ) , then the cost functional ( [ eqnfcost ] ) satisfies the bound .2 . if the controller defined by ( [ eqare ] ) , ( [ eqdefares ] ) , and ( [ eqcontrol2 ] ) is applied to the uncertain system defined by ( [ eqnniqc ] ) , ( [ eqc1system ] ) , then the cost functional ( [ eqnfcost ] ) satisfies the bound .* _ proof : _ * the first part of the theorem follows directly from the main results of [ theorem 5.3.1 ] .the second part of the theorem is a result of condition ( [ eqcondd1 ] ) which allows for the system ( [ eqnniqc ] ) , ( [ eqc1system ] ) to be written in the form ( [ eqfiqc ] ) , ( [ eqc2system ] ) and hence application of the result in [ theorem 5.3.1 ] to this system will result in as noted in observation [ obs1 ] . [ th1 ] suppose there exist constants and vectors and such that conditions ( [ eqcondq ] ) , ( [ eqcondd1 ] ) , along with assumption 1 are satisfied . if the nonlinear controller defined by ( [ eqnestimator ] ) , ( [ eqnon2 ] ) , ( [ eqnstate ] ) is applied to the nonlinear uncertain system defined by ( [ eqsystem ] ) , ( [ eqnon1 ] ) , ( [ eqiqc1 ] ) , then the cost functional ( [ eqnfcost ] ) satisfies the bound .* _ proof : _ * the result directly follows from part ( ii ) of theorem [ th1 ] and the construction of the iqc ( [ eqnniqc ] ) , the system ( [ eqc1system ] ) and the controller ( [ eqnestimator ] ) along with the discussion in observation [ obs1 ] . 
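The guaranteed-cost machinery behind the theorem can be illustrated in miniature. The scalar problem below is a hypothetical stand-in, since none of the matrices of eq. ([eqare]) survive in the text: it solves the scalar algebraic Riccati equation for dynamics x' = a x + b u with cost J = integral of (q x^2 + r u^2), forms the state feedback gain, and verifies by simulation that the quadratic form X x(0)^2 bounds the accumulated closed-loop cost (tightly, in this uncertainty-free scalar case).

```python
import math

# Scalar stand-in for the guaranteed-cost LQR design.
a, b, q, r = 1.0, 1.0, 1.0, 1.0
# Stabilizing solution of the scalar ARE: 2 a X - b^2 X^2 / r + q = 0.
X = r * (a + math.sqrt(a * a + b * b * q / r)) / (b * b)
K = b * X / r                        # state feedback gain, u = -K x

x = x0 = 1.0
dt = 1e-3
cost = 0.0
for _ in range(10_000):              # forward-Euler simulation of the closed loop
    u = -K * x
    cost += (q * x * x + r * u * u) * dt
    x += (a * x + b * u) * dt
# For the scalar LQR the bound is tight: cost ~= X * x0**2
```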
example of state feedback control of axial compressor surge is considered in and is given as follows : where and are the system states , and is the control input . in order to obtain a nonlinearity which is monotonic and sector bounded , we add a linear function to the nonlinearity .we also add an additional uncertainty satisfying an iqc for robustness purposes .hence , we obtain where . the uncertainty input satisfies the iqc for all .we solve the algebraic riccati equation ( [ eqare ] ) for the steady state stabilizing solution for possible values of and satisfying conditions ( [ eqmon1 ] ) , ( [ eqmon2 ] ) , ( [ eqcondd1 ] ) along with assumption 1 and considering the iqcs ( [ eqiqc1 ] ) , ( [ eqiqc2])-([eqiqc4 ] ) .these values of and $ ] are chosen so that steady state cost bound ( [ eqbound ] ) is minimized .the value of bound on the cost functional ( [ eqnfcost ] ) for an initial condition of and is obtained as for the following values of the parameters : the cost bound obtained using this scheme is lower than the cost bound obtained in for the same example .this is expected as in the state feedback design we assume that all states are available for measurement .the nonlinear system with the nonlinear state feedback controller has also been simulated using the above initial conditions and by assuming .the result of the simulation is presented in fig .[ fig : simulation ] .it is observed that the control system performance is satisfactory. i. r. petersen , `` robust output feedback guaranteed cost control of nonlinear stochastic uncertain systems via an iqc approach , '' _ ieee transactions on automatic control _ , vol .54 , no . 6 , pp .12991304 , 2009 . | this paper presents a systematic approach to the design of a robust dynamic state feedback controller using copies of the plant nonlinearities , which is based on the use of iqcs and minimax lqr control . 
The approach combines a linear state feedback guaranteed cost controller and copies of the plant nonlinearities to form a robust nonlinear controller.
Since its formulation, quantum mechanics (QM) has always piqued the interest of individuals with its inherent mystery and its seemingly paradoxical, but experimentally reproducible, results. At the heart of this mystery are the superposition principle and the measurement problem. The situation has been nicely described by Wallace: "solutions of the measurement problem are often called 'interpretations of QM', the idea being that all such 'interpretations' agree on the formalism and thus the experimental predictions", and thus the interpretation of QM in entropic dynamics will inevitably be discussed in the context of the measurement problem. ED differs from most physical theories. In most approaches to QM one starts with the formalism and the interpretation is appended to it, almost as an afterthought. In ED one starts with the interpretation, that is, one specifies what the ontic elements of the theory are, and only then one develops a formalism appropriate to describe, predict and control those ontic elements. For instance, to derive non-relativistic QM one starts from the assumption that particles have definite yet unknown positions and then one proceeds to derive ED as a non-dissipative diffusion. The solution to the measurement problem should address two problems: one is the problem of _definite outcomes_ (when and how does the wavefunction collapse), the other is the problem of the _preferred basis_, which is also known as the _basis degeneracy problem_. (For a review see .) Both problems as well as the interpretation of QM in ED will be addressed in the body of this paper. Entropic dynamics (ED) derives the laws of physics as an example of entropic inference. This view is extremely constraining. For example, there is no room for quantum probabilities: probabilities are neither classical nor quantum, they are tools for reasoning that have universal applicability.
Related to this is the fact that once one claims that the wavefunction is an epistemic object (_i.e._, a probability) then its time evolution, the updating of , is not at all arbitrary. The laws of dynamics must be dictated by the usual rules of inference. There is one feature of the unified Bayesian/entropic inference approach that turns out to be particularly relevant to the foundations of quantum theory. The issue in question is von Neumann's proposal of two modes of evolution for the wavefunction: either continuous and unitary as given by the Schrödinger equation, or discontinuous and stochastic as described by the projection postulate, the collapse of the wavefunction. Once one adopts the broader inference scheme that recognizes the consistency of Bayesian updating with entropic updating, the apparent conflict between the two modes of wavefunction evolution disappears: the wavefunction is an epistemic object meant to be updated on the basis of new information. When the system is isolated the evolution is obtained by a continuous entropic updating which leads to a unitary evolution described by the Schrödinger equation. When, in a measurement, the information is in the form of discrete data, the Bayesian update is discontinuous and the wavefunction collapses. Both forms of updating coexist harmoniously within the framework of ED. In the von Neumann measurement scheme, the macroscopic measurement device is treated quantum mechanically. The initial state of the measuring device is generally given as , indicating the device is in the ready or reference state. An interaction between the system, in an initial state , and the measuring device leads to a state in which the system becomes entangled with the device's pointer variable . The standard von Neumann measurement scheme involves interactions such that the system and measurement device evolve into a special entangled state called a biorthogonal state (_i.e._, there are no cross terms in eq. ([vn a]), for ) (Schlosshauer ). Thus, a measurement that finds the pointer variable in state  seemingly allows the observer to infer that the system is in state . However, the right-hand side of eq. ([vn a]) can be expanded in some other basis. There is no ontological matter of fact about which outcome has been obtained. Thus we have problems: not only do we have a never-observed phenomenon of macroscopic entanglement, but we also have a _problem of no definite outcomes_, also called the _preferred basis problem_ or the _basis degeneracy problem_. In ED the collapse of the wave function involves a straightforward application of Bayes theorem to find , the probability that the system is in state  given a detection of the pointer variable in state . In ED it is possible and convenient to introduce observables other than position (_e.g._, momentum, energy, and so on) but these are not attributes of the system; they are attributes of their probability distributions. While positions are ontic elements, these other observables are purely epistemic; they are properties of the wave function, not of the system. These ideas can be pushed to an extreme when discussing the notion of the weak value of an operator , in which the system is prepared in an initial state  and post-selected in state . Weak values are complex and therefore strictly they cannot be observed, and yet they may still be _inferred_.
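The Bayesian update that ED uses for collapse can be made concrete with a toy example. All numbers below are illustrative (a hypothetical two-state system and a noisy pointer), not taken from the text: the prior over system states is the Born-rule weight of the pre-measurement state, and detecting the pointer reading updates it by Bayes theorem.

```python
# Hypothetical two-state system entangled with a pointer; numbers are illustrative.
prior = {"a1": 0.64, "a2": 0.36}               # |c_n|^2 from the pre-measurement state
likelihood = {                                  # P(pointer reads r | system state a_n)
    "a1": {"r1": 0.95, "r2": 0.05},
    "a2": {"r1": 0.10, "r2": 0.90},
}
reading = "r1"
unnorm = {s: prior[s] * likelihood[s][reading] for s in prior}
evidence = sum(unnorm.values())
posterior = {s: p / evidence for s, p in unnorm.items()}
# The update "collapses" the distribution sharply toward a1.
```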
from the perspective of an inference framework such as ed the common reference to ` observables ' is misguided .it should be replaced by bell s term ` beables ' for ontic elements such as particle positions , and perhaps ` inferables ' for those epistemic elements associated to probability distributions .after a brief review of ed and of the simplest or direct type of measurement , we discuss the less direct types of measurement the von neumann and weak measurements in which information about the system is inferred by observation of another system , the pointer device , to which it has been suitably correlated .finally we discuss weak values in the context of ed .here we give a brief overview of ed emphasizing those ideas relevant to the measurement problem .( for a review with more extended references see ed . )the system consists of particles living in a flat euclidean space with metric .the positions of the particles have _ definite _ values which are however _ unknown_. we already see that ed differs from the copenhagen interpretation :in the latter , positions become definite only as a result of the measurement process .positions are denoted where denotes the particle and denotes the spatial coordinates .the microstate , where , is a point in the dimensional configuration space .the objective is to predict the particle positions and their motion .the first assumption is that particles follow continuous trajectories so that the motion can be analyzed as a sequence of short steps .we use the method of maximum entropy to find the probability that the particles take infinitesimally short steps from to .the information that the steps are infinitesimal is given by constraints , (eventually we take the limit imposing continuous motion ) . 
to introduce correlationswe impose one additional constraint , , is another small constant , and is the drift potential .it is this single constraint acting on the dimensional configuration space that leads to quantum effects such as interference and entanglement .the physical nature of the potential need not concern us here .we merely postulate its existence and note is closely related to the phase of the wave function .affects the motion of particles it plays the role of a pilot wave or an electromagnetic field .indeed , is _ as real as _ the vector potential .the close relation between them is the gauge symmetry ( see ) . ]the result of maximizing entropy with an appropriate choice of lagrange multipliers leads to ~ , \label{prob xp / x a}\]]where is a normalization constant , is the time interval between and , the s are particle - specific constants called masses , and is a constant that fixes the units of time relative to those of length and mass . once we have for an infinitesimal step we can ask how a probability distribution evolves from one step to the next step by considering , iterating this process for very short steps allows one to find the evolution of to be given by the fokker - planck equation ( an explication can be found in ) , is the velocity of the probability flow in configuration space or _ current velocity _ , ( [ fp b ] ) describes a standard diffusion . 
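The standard-diffusion behavior of eq. ([fp b]) can be illustrated by iterating short Gaussian steps of the form of the transition probability ([prob xp/x a]). The drift velocity and diffusion coefficient below are illustrative constants, not quantities from the text; the check is that the ensemble variance grows linearly in time while the mean advances with the drift.

```python
import math
import random

random.seed(42)
D, drift = 0.5, 0.2            # illustrative diffusion constant and drift velocity
dt, steps, walkers = 0.02, 100, 5000
T = dt * steps
xs = [0.0] * walkers
for _ in range(steps):
    for i in range(walkers):
        # Short Gaussian step: mean drift*dt, variance 2*D*dt.
        xs[i] += drift * dt + random.gauss(0.0, math.sqrt(2.0 * D * dt))
mean = sum(xs) / walkers                              # ~= drift * T
var = sum((x - mean) ** 2 for x in xs) / walkers      # ~= 2 * D * T
```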
to obtain a non - dissipative dynamics the choice of drift potential must be revised after each step , which means that the drift potential , or equivalently , the phase , is promoted to a dynamical degree of freedom .the correct updating , , follows from requiring that a certain functional ] and the parameter estimation scheme in to find in agreement with .there are several potentially interesting weak values but in particular consider the operator post selected in a momentum state , which is proportional to the full wavefunction and is a constant that will be removed after normalization .et al _ show that the real and imaginary parts of are proportional to the position and momentum shifts of the pointer variable and claim they are measuring the wavefunction . from the ed perspective it is more appropriate to state that the value of the wavefunction at each is being inferred .if the value of is inferred with certainty then it is possible to solve for the phase of the wavefunction ( up to an additive constant ) and also infer the values of the drift potential and its derivatives .this provides a link to wiseman s use of weak values to measure the probability current in bohmian mechanics . especially in cases as exotic as the above , ed takes the standpoint that weak values and quantities other than position ( energy , momentum , etc . )are best considered as epistemic inferables " rather than ontic beables or observables .we have discussed the von neumann and weak measurement schemes in the framework of entropic dynamics and have offered solutions to the problems of _ definite values _ and of the _ preferred basis_. 
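A weak value A_w = ⟨φ|A|ψ⟩/⟨φ|ψ⟩ of the kind discussed above can lie far outside the operator's spectrum when the pre- and post-selected states are nearly orthogonal. A minimal spin-1/2 sketch (the operator σx and the particular states are chosen purely for illustration, not from the text):

```python
import math

psi = [1.0, 0.0]                                   # pre-selected state |0>
theta = 0.9 * math.pi                              # post-selection nearly orthogonal to |0>
phi = [math.cos(theta / 2.0), math.sin(theta / 2.0)]

A_psi = [psi[1], psi[0]]                           # sigma_x swaps the components
num = phi[0] * A_psi[0] + phi[1] * A_psi[1]        # <phi|A|psi> (states are real here)
den = phi[0] * psi[0] + phi[1] * psi[1]            # <phi|psi>, small but nonzero
A_w = num / den
# |A_w| greatly exceeds 1, the largest eigenvalue of sigma_x: an "anomalous" weak value.
```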
From the perspective of ED, quantum mechanics is a framework for updating probabilities, for processing information. When the quantum system is left undisturbed it evolves smoothly and the appropriate tool for updating is the Schrödinger equation; when the system is coupled to a measuring device the appropriate tool to handle information in the form of data is Bayes theorem. There is no conflict: information of different types is handled differently. The fact that position plays a privileged role (positions are "beables") provides us with a natural pointer variable. In ED quantities normally called "observables", such as energy and momentum, and the quantities called "weak values" are not ontic properties of the system. They are epistemic properties of the wave function. None of these quantities are beables and they cannot be observed directly. Instead they can be inferred by appropriate position measurements, either of the system itself or of another system with which it is suitably entangled. Such quantities should more appropriately be referred to as "inferables". It is the explicit recognition of the ontic nature of positions vs. the epistemic nature of other inferables that allows one to circumvent the paradoxes and confusion that have surrounded the problem of measurement. We would like to thank the reviewers and also: M. Abedi, D. Barolomeo, N. Carrara, A. Fernandes, T. Gai, S. Ipek, K. Knuth, S. Nawaz, P. Pessoa, J. Tian, J. Walsh, and A. Yousefi for their many discussions on entropy, inference, and quantum mechanics. A. Caticha, _Entropic Inference and the Foundations of Physics_ (monograph commissioned by the 11th Brazilian Meeting on Bayesian Statistics, EBEB-2012); http://www.albany.edu/physics/acaticha-eifp-book.pdf. J. von Neumann, _Mathematische Grundlagen der Quantenmechanik_ (Springer-Verlag, Berlin, 1932) [English translation: _Mathematical Foundations of Quantum Mechanics_ (Princeton University Press, Princeton, 1983)]. A. Demme, A. Caticha, _The Classical Limit of Entropic Quantum Dynamics_, presented at MaxEnt 2016, the 36th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering (Ghent, Belgium, 2016), in these proceedings. The problem of measurement in quantum mechanics is studied within the entropic dynamics framework. We discuss von Neumann and weak measurements, wavefunction collapse, and weak values as examples of Bayesian and entropic inference.
optical interferometry provides physicists and astronomers with an exquisite set of probes of the micro and macrocosmos . from the laboratory to the observatory over the past few decades , there has been a surge of activity in developing new tools for ground - based optical astronomy , of which interferometry is one of the most powerful .an optical interferometer is a device that combines two or more light waves emitted from the same source at the same time to produce interference fringes .the implementation of interferometry in optical astronomy began more than a century ago with the work of fizeau ( 1868 ) .michelson and pease ( 1921 ) measured successfully the angular diameter of ( ori ) , using an interferometer based on two flat mirrors , which allowed them to measure the fringe visibility in the interference pattern formed by starlight at the detector plane .however , progress was hindered by the severe image degradation produced by atmospheric turbulence in the optical spectrum .the field remained dormant until the development of intensity interferometry by hanbury brown and twiss ( 1958 ) , a technique that employs two adjacent sets of mirrors and photoelectric correlation .turbulence and the concomitant development of thermal convection in the atmosphere distort the phase and amplitude of an incoming wavefront of starlight .the longer the path , the greater the degradation that the image suffers .light reaching the entrance pupil of an imaging system is coherent only within patches of diameters of order r , fried s parameter ( fried , 1966 ) .this limited coherence causes blurring of the image , blurring that is modeled by a convolution with the point - spread function ( psf ) . 
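The fringe visibility that Michelson and Pease measured is V = (I_max − I_min)/(I_max + I_min); for two interfering beams of intensities I1 and I2 this reduces to 2·sqrt(I1·I2)/(I1 + I2). A quick numeric check with illustrative intensities:

```python
import math

I1, I2 = 1.0, 0.25   # illustrative beam intensities

def fringe(dphi):
    # Two-beam interference law: I(dphi) = I1 + I2 + 2*sqrt(I1*I2)*cos(dphi).
    return I1 + I2 + 2.0 * math.sqrt(I1 * I2) * math.cos(dphi)

Imax, Imin = fringe(0.0), fringe(math.pi)
visibility = (Imax - Imin) / (Imax + Imin)   # = 2*sqrt(I1*I2)/(I1+I2) = 0.8 here
```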
both the sharpness of astronomical images and the signal - to - noise ( s / n ) ratio ( hence faintness of the objects that can be studied ) depend on angular resolution , the latter because noise comes from as much of the sky as is in the resolution element .thus reducing the beam width from , say , 1 arcsecond ( ) to 0.5 reduces sky noise by a factor of 4 .two physical phenomena limit the minimum resolvable angle at optical and infrared ( ir ) wavelengths - diameter of the collecting area and turbulence in the atmosphere .the crossover between domination by aperture size ( /aperture diameter ) and domination by atmospheric turbulence ( ` seeing ' ) occurs when the aperture becomes larger than the size of a characteristic turbulent element .the image of a star obtained through a large telescope looks ` speckled ' or grainy because different parts of the image are blurred by small areas of turbulence in the earth s atmosphere .labeyrie ( 1970 ) proposed speckle interferometry ( si ) , a process that deciphers the diffraction - limited fourier spectrum and image features of stellar objects by taking a large number of very - short - exposure images of the same field .computer assistance is then used to reconstruct from these many images a single image that is free of turbulent areas - in essence , an image of the object as it might appear from space .the success of speckle interferometry in measuring the diameters of stars encouraged astronomers to develop further image - processing techniques .these techniques are , for the most part , post - detection processes .recent advances in technology have produced the hardware to compensate for wave - front distortion in real time .adaptive optics ( ao ; babcock , 1953 ; rousset et al . ,1990 ) is based on this hardware - oriented approach , which sharpens telescope images blurred by the earth s atmosphere .it employs a combination of deformation of reflecting surfaces ( i.e. 
, flexible mirrors ) and post - detection image restoration ( roddier , 1999 ) .one of its most successful applications has been in imaging of neptune s ring arcs .adaptive optical imaging systems have been treated in depth by the review of roggemann et al .( 1997 ) , which includes discussion of wavefront compensation devices , wavefront sensors , control systems , performance of ao systems , and representative experimental results .it also deals with the characterization of atmospheric turbulence , the si technique , and deconvolution techniques for wavefront sensing .long - baseline optical interferometry ( lboi ) uses two or more optical telescopes to synthesize the angular resolution of a much larger aperture ( aperture synthesis ) than would be possible with a single mirror .labeyrie ( 1975 ) extended the concept of speckle interferometry to a pair of telescopes in a north - south baseline configuration , and subsequently astronomers have created larger ground - based arrays .a few of these arrays , e.g. , the keck interferometer and the very large telescope interferometer ( vlti ) , employ large telescopes fitted with ao systems .the combination of long - baseline interferometry , mimicking a wide aperture , and ao techniques to improve the images offers the best of both approaches and shows great promise for applications such as the search for extra - solar planets . at this pointit seems clear that interferometry and ao are complementary , and neither can reach its full potential without the other .the present review addresses the aims , methods , scientific achievements , and future prospects of long - baseline interferometry ( lbi ) at optical and infrared wavelengths , carried out with two or more apertures separated by more than their own sizes . 
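The resolution gain from aperture synthesis can be quantified with the usual diffraction estimates: roughly 1.22·λ/D for a filled aperture of diameter D versus λ/B for a baseline B. The 8 m aperture and 100 m baseline below are illustrative numbers on VLTI-like scales, not values taken from the text.

```python
import math

lam = 2.2e-6                                   # K-band wavelength, metres
rad_to_mas = math.degrees(1.0) * 3.6e6         # radians -> milliarcseconds

theta_single = 1.22 * lam / 8.0 * rad_to_mas   # diffraction limit of an 8 m aperture
theta_interf = lam / 100.0 * rad_to_mas        # fringe spacing of a 100 m baseline
# theta_single ~= 69 mas, theta_interf ~= 4.5 mas: the baseline wins by more than 10x.
```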
in order to embark on such a subject ,we first review the basic principles of interferometry and its applications , the theoretical aspects of si as a statistical analysis of a speckle pattern , and the limitations imposed by the atmosphere and the detectors on the performance of single - aperture diffraction - limited imaging . other related concerns , such as the relationship between image - plane techniques and pupil - plane interferometry , phase - closure methods , and dark speckle imaging , will also be treated .adaptive optics as a pre - detection compensation technique is described in brief , as are the strengths and weaknesses of pre and post detection . the final part of this review deals with the applications of multiple - telescope interferometry to imaging , astrometry , and nulling .these applications entail specific problems having to do with delaylines , beam recombination , polarization , dispersion , fringe tracking , and the recovery of visibility functions .various image restoration techniques are discussed as well , with emphasis on the deconvolution methods used in aperture - synthesis mapping .astronomical sources emit incoherent light consisting of the random superposition of numerous successive short - lived waves sent out from many elementary emitters , and therefore , the optical coherence is related to the various forms of correlations of the random process . 
for a monochromatic wave field ,the amplitude of vibration at any point is constant and the phase varies linearly with time .conversely , the amplitude and phase in the case of quasi - monochromatic wave field , undergo irregular fluctuations ( born and wolf , 1984 ) .the rapidity of fluctuations depends on the light crossing time of the emitting region .interferometers based on ( i ) wavefront division ( young s experiment that is sensitive to the size and bandwidth of the source ) , ( ii ) amplitude division ( michelson s interferometer ) are generally used to measure spatial coherence and temporal coherence , respectively . in what follows ,some of the fundamental mathematical steps pertinent to the interferometry are illustrated .a monochromatic plane wave , at a point , , is expressed as , }\right\}.\ ] ] here the symbol is the ` real part of ' , the position vector of a point , is the amplitude of the wave , the time , the frequency of the wave , the phase functions that are of the form , in which is the propagation vector , and the phase constants which specify the state of polarization , denoting the complex vector function of position by , the complex representation of the analytic signal , , associated with becomes , }\\ & & = \psi({\bf r})e^{-i2\pi\nu_\circ t}. \end{aligned}\ ] ] this complex representation is preferred for linear time invariant systems , because the eigenfunctions of such systems are of the form , where is the angular frequency . 
from equations ( 1 ) and ( 2 ), the relationship translates into , , \end{aligned}\ ] ] the intensity of light is defined as the time average of the amount of energy , therefore , taking the latter over an interval much greater than the period , , the intensity at the same point is calculated as , where stands for the ensemble average of the quantity within the bracket and represents for the complex conjugate of .since the complex amplitude is a constant phasor in the monochromatic case , the fourier transform ( ft ) of the complex representation of the signal , , is given by , it is equal to twice the positive part of the instantaneous spectrum , . in the polychromatic case , the complex amplitude becomes , the disturbance produced by a real physical source is calculated by the integration of the monochromatic signals over an optical band pass . in the case of quasi - monochromatic approximation (if the width of the spectrum , ) , the expression modifies as , },\ ] ] where the field is characterized by the complex amplitude , , _i.e. _ , this phasor is time dependent , although it varies slowly with respect to . the complex amplitude , diffracted at angle in the telescope focal - plane is given by , where is a two - dimensional ( 2-d ) space vector , the focal length , the complex amplitude at the telescope aperture , and the pupil transmission function of the telescope aperture .for an ideal telescope , we have inside the aperture and outside the aperture . 
in the space - invariant case , ,\end{aligned}\ ] ] where represents for the complex ft and the dimensionless variable is equal to and hence , can be replaced by and so with by .the irradiance diffracted in the direction is the psf of the telescope and the atmosphere and its ft , is the optical transfer function ( otf ) : } d{\pmb \alpha},\ ] ] where is the spatial frequency expressed in radian , and the modulation transfer function ( mtf ) .the convolution of two functions simulates phenomena such as a blurring of a photograph that may be caused by various reasons , e.g. , ( a ) poor focus , ( b ) motion of a photographer during the exposure . in such a blurred picture each point of objectis replaced by a spread function .for the 2-d incoherent source , the complex amplitude in the image - plane is the convolution of complex amplitude of the pupil plane and the pupil transmission function . in the fourier plane , the effect of the convolution becomes a multiplication , point by point of the otf of the pupil , , with the transform of the object .i.e. , the illumination at the focal - plane of the telescope observed as a function of image - plane is , |^2.\end{aligned}\ ] ] the autocorrelation of this function , , is expressed as , = \widehat{\cal s}({\bf f } ) \widehat{\cal s}^\ast ( { \bf f } ) = |\widehat{\cal s}{\bf ( f)}|^2,\ ] ] where stands for correlation . in an ideal condition ,the resolution that can be achieved in an imaging experiment , , is limited only by the imperfections in the optical system and according to strehl s criterion , the resolving power , , of any telescope of diameter is given by the integral of its transfer function , therefore , . the strehl ratio is defined as the ratio of the intensity at the centroid of the observed psf , , to the intensity of the peak of the diffraction - limited image or ` airy spike ' , , i.e. 
, $S_R \simeq e^{-(k\sigma)^2}$, where $k = 2\pi/\lambda$ is the wave number and $\sigma$ the rms optical path difference (OPD) error. Typical ground-based observations with large telescopes in the visible wavelength range are made with a small Strehl ratio (Babcock, 1990), while a diffraction-limited telescope would, by definition, have a Strehl ratio of 1.

When two light beams from a single source are superposed, the intensity at the place of superposition varies from point to point between maxima, which exceed the sum of the intensities in the beams, and minima, which may be zero. This phenomenon is known as interference; the correlated fluctuations can be partially or completely coherent (Born and Wolf, 1984). Let two monochromatic waves $U_1$ and $U_2$ be superposed at the recombination point. The correlator sums the instantaneous amplitudes of the fields, so the total field at the output is $U = U_1 + U_2$. If $\psi_1$ and $\psi_2$ are the complex amplitudes of the two waves, with the corresponding phases $\phi_1$ and $\phi_2$, these two waves are propagating in the $z$ direction and linearly polarized with the electric field vector in the $x$ direction. (A general radiation field is described by the four Stokes parameters, $I$, $Q$, $U$, and $V$, which specify the intensity, the degree of polarization, the plane of polarization, and the ellipticity of the radiation at each point and in any given direction.) Therefore, the total intensity (see equation 4) at the same point can be determined as $I = I_1 + I_2 + 2\sqrt{I_1 I_2}\cos\delta$, where $I_1$ and $I_2$ are the intensities of the two terms and the last term is the interference term, which depends on the amplitude components as well as on the phase-difference between the two waves, $\delta = \phi_1 - \phi_2 = 2\pi\Delta/\lambda$ ($\Delta$ is the OPD between the two waves from the common source to the intersecting point, and $\lambda$ is the wavelength in vacuum).
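As a numerical illustration of the Strehl ratio discussed above, the Maréchal approximation $S_R \simeq e^{-(k\sigma)^2}$ relating the rms OPD error to the Strehl ratio can be evaluated directly; this is a minimal sketch (the function name is mine, not from the text):

```python
import math

def strehl_ratio(rms_opd, wavelength):
    """Marechal approximation: S ~ exp(-(k * sigma)^2), with k = 2*pi/lambda."""
    k = 2.0 * math.pi / wavelength
    return math.exp(-(k * rms_opd) ** 2)

# A perfect wavefront reaches the diffraction limit, S = 1.
s_perfect = strehl_ratio(0.0, 500e-9)

# lambda/14 rms OPD gives S ~ 0.8, the usual "diffraction-limited" criterion.
s_marechal = strehl_ratio(500e-9 / 14.0, 500e-9)
```

Note how quickly the ratio collapses for wavefront errors beyond a small fraction of a wavelength, which is why uncompensated visible-light observations sit at small Strehl ratios.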
In general, two light beams are not correlated: the correlation term takes on significant values only for a short period of time, and its long-term average is zero. Time variations of the field are statistical in nature (Mandel and Wolf, 1995); hence one seeks a statistical description of the field (its correlations), as the field is due to a partially coherent source. Depending upon the correlations between the phasor amplitudes at different object points, one would expect a definite correlation between two points of the field emitted by the object. The maximum and minimum intensity occur when $\cos\delta = 1$ and $\cos\delta = -1$, respectively; if $I_1 = I_2 = I$, the intensity varies between $4I$ and 0. In the case of a quasi-monochromatic wave, the analytic signal obtained at the observation point is a weighted sum of the signals at the two pinholes in the wave field, each delayed by the time needed to travel from the respective pinhole to the meeting point; if the pinholes are small and the diffracted fields are considered to be uniform, the weighting constants turn out to be purely imaginary, and the intensity at the output contains the interference cross-term. The van Cittert-Zernike theorem states that the modulus of the complex degree of coherence (which describes the correlation of vibrations at a fixed point and a variable point) in a plane illuminated by an incoherent quasi-monochromatic source is equal to the modulus of the normalized spatial FT of its brightness distribution (Born and Wolf, 1984, Mandel and Wolf, 1995). The observed image is the FT of the mutual coherence function, or correlation function. The complex degree of (mutual) coherence, $\gamma_{12}(\tau)$, of the observed source is defined as,
\[
\gamma_{12}(\tau) = \frac{\Gamma_{12}(\tau)}{\sqrt{\Gamma_{11}(0)\,\Gamma_{22}(0)}},
\]
where $\Gamma_{12}(\tau) = \left<V_1(t+\tau)V_2^\ast(t)\right>$ is the mutual coherence function. The function $\gamma_{12}(\tau)$ is measured at the two points.
At a point where the two points coincide, the self-coherence reduces to the ordinary intensity. The ensemble average can be replaced by a time average due to the assumed ergodicity (a random process that is strictly stationary) of the fields. If both fields are directed onto a quadratic detector, it yields the desired cross-term (a time average due to the finite time response). In order to keep the time correlation close to unity, the delay, $\tau$, must be limited to a small fraction of the temporal width, or coherence time, $\tau_c = 1/\Delta\nu$; here, $\Delta\nu$ is the spectral width. The relative coherence of the two beams diminishes with increasing path-length difference, culminating in a drop in the visibility of the fringes (a dimensionless number between zero and one that indicates the extent to which a source is resolved on the baseline being used). For delays much smaller than the coherence time, the exponential term is nearly constant, and $\gamma_{12}(0)$ measures the spatial coherence. The measured intensity at a distance from the origin (the point of zero OPD) on a screen at a distance from the apertures depends on the corresponding OPD and on the distance between the two apertures. The modulus of the fringe visibility is estimated as the ratio of high-frequency to low-frequency energy in the average spectral density; classically, the visibility of the fringes is defined as,
\[
{\cal V} = \frac{I_{\max} - I_{\min}}{I_{\max} + I_{\min}}.
\]
Fizeau (1868) suggested that installing a screen with two holes in front of a telescope would allow measurements of stellar diameters with diffraction-limited resolution. In this set-up, the beams are diffracted by the sub-apertures and the telescope acts as both collector and correlator; therefore, temporal coherence is automatically obtained due to the built-in zero OPD.
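The drop in fringe visibility with decreasing coherence can be sketched numerically. The snippet below builds a two-beam interference pattern and recovers the Michelson visibility $(I_{\max}-I_{\min})/(I_{\max}+I_{\min})$; the function names and inputs are illustrative, not from the text:

```python
import numpy as np

def fringe_pattern(i1, i2, gamma, phase):
    """Two-beam interference: I = I1 + I2 + 2*sqrt(I1*I2)*|gamma|*cos(phase)."""
    return i1 + i2 + 2.0 * np.sqrt(i1 * i2) * gamma * np.cos(phase)

def visibility(intensity):
    """Michelson fringe visibility: V = (Imax - Imin) / (Imax + Imin)."""
    return (intensity.max() - intensity.min()) / (intensity.max() + intensity.min())

phase = np.linspace(0.0, 4.0 * np.pi, 4001)   # samples the exact maxima/minima

# Equal beams, fully coherent (|gamma| = 1): unit visibility.
v_full = visibility(fringe_pattern(1.0, 1.0, 1.0, phase))

# Equal beams, partially coherent (|gamma| = 0.5): V = |gamma| = 0.5.
v_half = visibility(fringe_pattern(1.0, 1.0, 0.5, phase))
```

For equal beam intensities the recovered visibility equals the modulus of the degree of coherence, which is why fringe contrast directly measures $|\gamma_{12}|$.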
The spatial modulation frequency, as well as the required sampling of the image, changes with the separation of the sub-apertures. The maximum resolution in this case depends on the separation between the sub-apertures; the maximum spacings that can be explored are limited by the physical diameter of the telescope, so the number of stellar sources whose diameters can be measured is also limited. One of the first significant results was the measurement of the diameters of the satellites of Jupiter with a Fizeau interferometer on the 40-inch Yerkes refractor by Michelson (1891). With the 100-inch telescope on Mt. Wilson, Anderson (1920) determined the angular separation of the spectroscopic binary star Capella. Results from the classical Michelson interferometer were used to formulate special relativity, and the same principle is also being used in gravity-wave detection: gravitational radiation produced by coalescing binaries or exploding stars, for example, changes the metric of spacetime, and this effect causes a differential change in the path lengths of the arms of the interferometer, thereby introducing a phase-shift.
today, there are several ground - based long baseline laser - interferometric detectors based on this principle under construction , and within the next several years these detectors should be in operation ( robertson , 2000 ) .the proposed laser interferometer space antenna , consisting of three satellites in formation about 50 million kilometers ( km ) above the earth in a heliocentric orbit , may detect gravitational waves by measuring fluctuations in the distances between test masses carried by the satellites .the essence of the michelson s stellar interferometer is to determine the covariance of the complex amplitudes , at two different points of the wavefronts .this interferometer was equipped with four flat mirrors that fold the beams by installing a 7-meter ( m ) steel beam on top of the mt .wilson 100-inch telescope .michelson and pease ( 1921 ) resolved the supergiant ori and a few other stars . in this case , the spatial modulation frequency in the focal - plane is independent of the distance between the collectors . in the fizeau mode ,the ratio of aperture diameter / separation is constant from light collection to recombination in the image - plane ( homothetic pupil ) . 
in the michelson mode ,this ratio is not constant since the collimated beams have the same diameter from the output of the telescope to the recombination lens .the distance between pupils is equal to the baseline at the collection mirrors ( the resolution is limited by the baseline ) and to a much smaller value just before the recombination lens .the disadvantage of the michelson mode is a very narrow field of view compared to the fizeau mode .unfortunately the project was abandoned due to various difficulties , including ( i ) effect of atmospheric turbulence , ( ii ) variations of refractive index above the small sub - aperture , ( iii ) inadequate separation of outer mirrors , and ( iv ) mechanical instability .intensity interferometry considers the quantum theory of photon detection and correlation .it computes the fluctuations of the intensities , at two different points of the wavefronts .the fluctuations of the electrical signals from the two detectors are compared by a multiplier .the current output of each photoelectric detector is proportional to the instantaneous intensity of the incident light , which is the squared modulus of the amplitude .the fluctuation of the current output is proportional to this expression indicates that the covariance of the intensity fluctuations is the squared modulus of the covariance of the complex amplitude .having succeeded in completing the intensity interferometer at radio wavelengths ( hanbury brown et al .1952 ) , hanbury brown and twiss ( 1958 ) demonstrated its potential at optical wavelengths by measuring the angular diameter of sirius .subsequent development with a pair of 6.5 m light collector on a circular railway track spanning 188 m , provided the measurements of 32 southern binary stars ( hanbury brown , 1974 ) with angular resolution limit of 0.5 milli - arcseconds ( mas ) . 
In this arrangement, starlight collected by two concave mirrors is focused onto two photoelectric cells, and the correlation of fluctuations in the photocurrents is measured as a function of mirror separation. The advantages of such a system over Michelson's interferometer are that it does not require high mechanical stability and remains unaffected by seeing. Another noted advantage is that the alignment tolerances are extremely relaxed, since the path lengths need only be maintained to a fraction of $c/\Delta f$, where $\Delta f$ is the electrical bandwidth of the post-detection electronics. The most significant atmospheric effect comes from scintillation. The sensitivity of this interferometer was nevertheless found to be very low; it was limited by the narrow-bandwidth filters that are used to increase the speckle lifetime, and the correlated fluctuations can be obtained only if the detectors are spaced by less than a speckle width. Theoretical calculations (Roddier, 1988) show that the limiting visual magnitude (mag) that can be observed with such a system is of the order of 2. (The faintest stars visible to the naked eye are of 6th magnitude; the magnitude scale is defined as $m_1 - m_2 = -2.5\log(F_1/F_2)$, where $m_1$ and $m_2$ are the apparent magnitudes of two objects of fluxes $F_1$ and $F_2$, respectively.)

The density inhomogeneities of the atmosphere appear to be created and maintained by several parameters, viz., thermal gradients, humidity fluctuations, and wind shears, which produce atmospheric turbulence and therefore refractive-index inhomogeneities. The gradients caused by these environmental parameters warp the wavefront incident on the telescope pupil, and the image quality is directly related to the statistics of the perturbations of the incoming wavefront. The theory of seeing combines the theory of atmospheric turbulence with the theory of optical physics to predict the modifications to the diffraction-limited image that the refractive-index gradients produce (Young, 1974, Roddier, 1981, Coulman, 1985). Atmospheric turbulence has a significant effect on the propagation of radio waves and sound waves, and on the flight of aircraft as well. This section is devoted to descriptions of atmospheric turbulence theory, the metrology of seeing, and its impact on stellar images. The random fluctuations in atmospheric motions occur predominantly due to (i) the friction encountered by the air flow at the Earth's surface and the consequent formation of a wind-velocity profile with large vertical gradients, (ii) differential heating of different portions of the Earth's surface and the concomitant development of thermal convection, (iii) processes associated with the formation of clouds, involving release of heat of condensation and crystallization and subsequent changes in the nature of the temperature and wind-velocity fields, (iv) convergence and interaction of air masses with various atmospheric fronts, and (v) obstruction of air flows by mountain barriers that generate wave-like disturbances and rotor motions on their lee side. The atmosphere is difficult to study due to the high Reynolds number ($Re$), a dimensionless quantity that characterizes the turbulence.
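For concreteness, the magnitude scale quoted above in connection with the intensity interferometer's limiting magnitude can be checked with a short computation (the helper name is mine):

```python
import math

def magnitude_difference(flux1, flux2):
    """Apparent-magnitude difference: m1 - m2 = -2.5 * log10(F1 / F2)."""
    return -2.5 * math.log10(flux1 / flux2)

# A flux ratio of 100 corresponds to exactly 5 magnitudes;
# the brighter object has the (algebraically) smaller magnitude.
dm_100 = magnitude_difference(100.0, 1.0)

# One magnitude therefore corresponds to a flux ratio of 100**(1/5) ~ 2.512.
ratio_per_mag = 10.0 ** 0.4
```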
When the average velocity, $V$, of a viscous fluid of characteristic size, $L$, is gradually increased, two distinct states of fluid motion are observed (Tatarski, 1967, Ishimaru, 1978), viz., (i) laminar flow (regular and smooth in space and time) at very low $Re$, and (ii) unstable and random fluid motion at $Re$ greater than some critical value. The Reynolds number, obtained by equating the inertial and viscous forces, is given by $Re = VL/\nu$, where $L$ is a characteristic scale set by the flow geometry and $\nu$ is the kinematic viscosity of the fluid. When $Re$ exceeds a critical value in a pipe (depending on its geometry), a transition of the flow from laminar to turbulent, or chaotic, occurs; between these two extreme conditions, the flow passes through a series of unstable states. High-$Re$ turbulence is chaotic in both space and time and exhibits considerable spatial structure, with velocity fluctuations occurring on a wide range of space and time scales. According to the atmospheric turbulence model (Taylor, 1921, Kolmogorov, 1941b, 1941c), the energy enters the flow at low frequencies, at the outer scale length $L_\circ$ and the corresponding spatial frequency $\kappa_\circ$, as a direct result of the non-linearity of the Navier-Stokes equation governing fluid motion. The large-scale fluctuations, referred to as large eddies, have a size of the geometrically imposed outer scale length. These eddies are not universal with respect to flow geometry; they vary according to the local conditions. Conan et al. (2000) derived a mean value of 24 m for a von Kármán spectrum from the data obtained at Cerro Paranal, Chile. The energy is transported to smaller and smaller loss-less eddies until, at a small enough Reynolds number, the kinetic energy of the flow is converted into heat by viscous dissipation, resulting in a rapid drop in power spectral density beyond a critical wave number. These changes are characterized by the inner scale length, $l_\circ$, and the corresponding spatial frequency, $\kappa_m$, where $l_\circ$ varies from a few millimeters near the ground to a centimeter (cm) high in the atmosphere. The small-scale fluctuations with sizes between $l_\circ$ and $L_\circ$, known as the inertial subrange, have universal statistics (scale-invariant behavior) independent of the flow geometry; the extent of the inertial subrange would be different at various locations on the site. The statistical distribution of the size and number of these eddies is characterized by a randomly fluctuating term in space and time. The dependence of the refractive index of air, $n$, upon pressure, $P$ (millibar), and temperature, $T$ (Kelvin), at optical wavelengths is given by $n - 1 \simeq 77.6\times10^{-6}\,P/T$ (Ishimaru, 1978). The optically important property of the Kolmogorov law is that the refractive-index fluctuations are largest for the largest turbulent elements, up to the outer scale of the turbulence. At sizes below the outer scale, the one-dimensional (1-D) power spectrum of the refractive-index fluctuations falls off with the $-5/3$ power of frequency and is independent of the direction along which the fluctuations are measured, i.e., the small-scale fluctuations are isotropic (Young, 1974).
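A back-of-the-envelope evaluation of the Reynolds number introduced above shows why the free atmosphere is always deep in the turbulent regime; all numerical inputs below are assumed, order-of-magnitude figures, not values from the text:

```python
def reynolds_number(velocity, length, kinematic_viscosity):
    """Re = V * L / nu: the ratio of inertial to viscous forces."""
    return velocity * length / kinematic_viscosity

# Assumed values: wind speed ~10 m/s, outer scale ~15 m,
# kinematic viscosity of air near the ground ~1.5e-5 m^2/s.
re_atmosphere = reynolds_number(10.0, 15.0, 1.5e-5)

# For comparison, the classical critical value for pipe flow is ~2300;
# the atmospheric value (~1e7) is orders of magnitude beyond it.
```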
The three-dimensional (3-D) power spectrum, $\Phi_n(\kappa)$, for the wave number $\kappa$ in the inertial subrange can be written as
\[
\Phi_n(\kappa) = 0.033\,{\cal C}_n^2\,\kappa^{-11/3},
\]
where ${\cal C}_n^2$ is known as the structure constant of the refractive-index fluctuations. This Kolmogorov-Obukhov model of turbulence, describing the power-law spectrum for the inertial interval of wave numbers, is valid within the inertial subrange and is widely used for astronomical purposes (Tatarski, 1993). The refractive-index structure function, $D_n(\pmb\rho)$, is defined as
\[
D_n(\pmb\rho) = \left<\left|n({\bf r}) - n({\bf r} + \pmb\rho)\right|^2\right>,
\]
which expresses its variance at the two points ${\bf r}$ and ${\bf r} + \pmb\rho$. The structure functions are related to the covariance function, $B_n(\pmb\rho)$, through
\[
D_n(\pmb\rho) = 2\left[B_n(0) - B_n(\pmb\rho)\right],
\]
where the covariance is the 3-D FT of the spectrum $\Phi_n(\kappa)$ (Roddier, 1981). The structure function in the inertial range (a homogeneous and isotropic random field), according to Kolmogorov (1941a), depends on the magnitude of $\pmb\rho$, as well as on the rate of production or dissipation of turbulent energy and the rate of production or dissipation of temperature inhomogeneities. The refractive index is a function of the temperature and the humidity, and therefore the expectation value of the variance of the fluctuations about the average refractive index contains temperature, humidity, and cross terms. It has been argued that in optical propagation the last term is negligible, and that the second term is negligible for most astronomical observations; it could be significant, however, in a high-humidity situation, e.g., a marine boundary layer (Roddier, 1981). Most treatments ignore the contribution from humidity and express the refractive-index structure function (Tatarski, 1967) as
\[
D_n(\rho) = {\cal C}_n^2\,\rho^{2/3}, \qquad l_\circ \ll \rho \ll L_\circ.
\]
Similarly, the velocity structure function, $D_v(\rho)$, and the temperature structure function, $D_T(\rho)$, can also be derived; the same form holds for the humidity structure function. The two structure coefficients ${\cal C}_n$ and ${\cal C}_T$ are related through the dependence of $n$ on $T$, assuming pressure equilibrium (Roddier, 1981). Several experiments confirm this two-thirds power law in the atmosphere (Wyngaard et al., 1971, Coulman, 1974, Hartley et al., 1981, Lopez, 1991). Robbe et al. (1997) reported, from observations using a long baseline optical interferometer (LBOI), the Interféromètre à Deux Télescopes (I2T; Labeyrie, 1975), that most of the measured temporal spectra of the angle of arrival exhibit a behavior compatible with the said power law. Davis and Tango (1996) measured the value of the atmospheric coherence time with the Sydney University Stellar Interferometer (SUSI). The value of ${\cal C}_n^2$ (in equation 33) depends on local conditions, as well as on the planetary boundary layer. The significant scale lengths in the case of the former depend on the local objects, which primarily introduce changes in the inertial subrange and temperature differentials. The latter can be attributed to (i) the surface boundary layer due to ground convection, extending up to a few km in height, (ii) the free convection layer associated with orographic disturbances, where the scale lengths are height dependent, and (iii) the tropopause and above, where the turbulence is due to wind shear as the temperature gradient vanishes slowly. In real turbulent flows, turbulence is usually generated at solid boundaries; near the boundaries, shear is the dominant source (Nelkin, 2000), and the scale lengths are roughly constant.
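A minimal numerical sketch of the two-thirds power law discussed above; the ${\cal C}_n^2$ value is an assumed, typical figure, not one quoted in the text:

```python
import numpy as np

def dn_structure_function(rho, cn2):
    """Kolmogorov two-thirds law: D_n(rho) = Cn^2 * rho**(2/3) (inertial subrange)."""
    return cn2 * np.asarray(rho, dtype=float) ** (2.0 / 3.0)

cn2 = 1.0e-14                     # assumed structure constant, in m^(-2/3)
rho = np.array([1.0, 8.0])        # separations within the inertial subrange (m)
d_n = dn_structure_function(rho, cn2)

# Multiplying the separation by 8 multiplies D_n by 8**(2/3) = 4 —
# the scale-invariant growth that the cited experiments confirm.
growth = d_n[1] / d_n[0]
```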
In an experiment conducted by Cadot et al. (1997), it was found that Kolmogorov scaling is a good approximation for the energy dissipation, as well as for the torque due to viscous stress. They measured the energy dissipation and the torque for circular Couette flow, with and without small vanes attached to the cylinders to break up the boundary layer. The theory of turbulent flow in the neighborhood of a flat surface applies to the atmospheric surface layer. Masciadri et al. (1999) noticed that the value of ${\cal C}_n^2$ increases at about 11 km above Mt. Paranal, Chile; the turbulence concentrates into a thin layer of 100 m thickness, where the value of ${\cal C}_n^2$ increases by more than an order of magnitude over its background level. The spatial correlational properties of the turbulence-induced field perturbations are evaluated by combining the basic turbulence theory with the stratification and phase-screen approximations. The variance of the optical path can be translated into a variance of the phase fluctuations.
for calculating the same , roddier ( 1981 ) used the correlation properties for propagation through a single ( thin ) turbulence layer and then extended the procedure to account for many such layers .several investigators ( goodman , 1985 , troxel et al .1994 ) have argued that individual layers can be treated as independent provided the separation of the layer centers is chosen large enough so that the fluctuations of the log amplitude and phase introduced by different layers are uncorrelated .let a monochromatic plane wave of wavelength from a distant star at the zenith , propagates towards the ground - based observer ; the complex amplitude at co - ordinate , , is given by , the average value of the phase , for height h , and the unperturbed complex amplitude outside the atmosphere is normalized to unity ] ; therefore , the coherence function at ground level is given by , } \nonumber\\ & & = e^{-\frac{1}{2 } \left[2.91k^2\xi^{5/3 } \sum_{j=1}^n { \cal c}_n^2(h_j)\delta h_j\right]}. \end{aligned}\ ] ] this expression may be generalized for a star at an angular distance away from the zenith viewed through all over the turbulent atmosphere , }.\ ] ] the term ` seeing ' is the total effect of distortion in the path of the star light via different contributing layers of the atmosphere to the detector placed at the focus of the telescope . 
Let the MTF of the atmosphere and a telescope together be described as in Figure 1. The long-exposure PSF is defined by the ensemble average $<{\cal S}({\pmb \alpha})>$, which is independent of the direction. If the object emits incoherently, the average illumination of a resolved object obeys the convolution relationship with the PSF; using the 2-D FT, this translates into the product of the transfer function for long-exposure images with the object spectrum, where ${\bf f}$ is the spatial frequency vector. The argument of equation (56) is the Fourier phase at the corresponding spatial frequency, built up over the apertures corresponding to the seeing cells. The transfer function is the product of the atmosphere transfer function (the wave coherence function) and the telescope transfer function. For a long exposure through the atmosphere, the resolving power, ${\cal R}$, of any optical telescope is limited either by the telescope or by the atmosphere, depending on the relative widths of these two functions.

[[frieds-parameter]] Fried's parameter
+++++++++++++++++

According to equation (54), the atmospheric transfer function can be expressed in terms of the integrated turbulence profile; equation (61) then yields an atmosphere-limited resolving power proportional to $(r_\circ/\lambda)^2$, with a numerical factor involving $\Gamma(6/5)$. Fried (1966) introduced this critical diameter, $r_\circ$, of a telescope. The phase structure function (equation 43) across the telescope aperture (Fried, 1966) becomes
\[
D_\phi(\xi) = 6.88\left(\frac{\xi}{r_\circ}\right)^{5/3}.
\]
By replacing this value in equation (54), an expression for $r_\circ$ in terms of the distribution of the turbulence in the atmosphere is found to be
\[
r_\circ = \left[0.423\,k^2\sec\gamma\int{\cal C}_n^2(h)\,dh\right]^{-3/5}.
\]
Fried's parameter may be thought of as the diameter of a telescope that would produce the same diffraction-limited FWHM of a point-source image as the atmospheric turbulence would with an infinite-sized mirror.

[[seeing-at-the-telescope-site]] Seeing at the telescope site
++++++++++++++++++++++++++++

The major sources of image degradation are predominantly thermal and aero-dynamic disturbances in the atmosphere surrounding the telescope and its enclosure. These sources include: (i) convection in and around the building and the dome, at obstructed locations near the ground, and off the surface of the telescope structure, (ii) thermal distortion of the primary and secondary mirrors when they are warmer than the ambient air, (iii) dissipation of heat by the secondary mirror (Zago, 1995), (iv) a rise in temperature at the primary mirror cell, and (v) a rise in temperature at the focal point, causing a temperature gradient close to the detector. Degradation in image quality can also occur due to opto-mechanical aberrations, as well as mechanical vibrations of the optical system. Various corrective measures have been proposed to improve the seeing. These measures include: (i) insulating the surfaces of the floors and walls, (ii) introducing an active cooling system to eliminate the heat produced by electric equipment on the telescope and elsewhere in the dome, and (iii) installing a ventilator to generate a downward air flow through the slit to counteract the upward action of warm air bubbles (Racine, 1984, Ryan and Wood, 1995). Floor-chilling systems that dampen the natural convection and keep the temperature of the primary mirror closer to that of the air volume have also been implemented (Zago, 1995). Saha and Chinnappan (1999) found that the average observed seeing is worse during the later part of the night than during the earlier part; this change might indicate that the slowly cooling mirror creates thermal instabilities that decrease slowly during the night.

Ever since the development of the SI technique (Labeyrie, 1970), it has been widely employed, both in the visible and in the infrared (IR) bands, at telescopes to decipher diffraction-limited information. The following sub-sections deal with single-aperture speckle imaging and related avenues, other techniques, AO imaging systems, dark speckle imaging, and high-resolution sensors. If a point source is imaged through the telescope using a pupil function consisting of two sub-apertures of size $\sim r_\circ$, corresponding to two seeing cells separated by a vector, a fringe pattern is produced with a narrow spatial-frequency bandwidth that moves within the broad PSF envelope; with increasing distance between the sub-apertures, the fringes move with an increasingly larger amplitude. The introduction of a third sub-aperture gives three pairs of sub-apertures and yields the appearance of three intersecting patterns of moving fringes. Covering the telescope aperture with $r_\circ$-sized sub-apertures synthesizes a filled-aperture interferometer, each pair of sub-apertures being separated by a baseline. The intensity at the focal plane, according to diffraction theory (Born and Wolf, 1984), contains interference terms, each multiplied by a factor depending on the random instantaneous shift of the corresponding fringe pattern. Each sub-aperture is small enough for the field to be coherent over its extent. Atmospheric turbulence does
not affect the contrast of the fringes produced, but it introduces phase delays. If the integration time is shorter than the evolution time of the phase inhomogeneities, the interference fringes are preserved but their phases are randomly distorted, which produces `speckles'. (The formation of speckles stems from the summation of coherent vibrations having random characteristics; it can be modeled as a 2-dimensional random walk with Fresnel's vector representation of vibrations.) Each speckle covers an area of the same order of magnitude as the Airy disc of the telescope. The number of correlation cells is proportional to the square of $D/r_\circ$, and the number of photons per speckle is independent of its diameter. The lifetime of speckles is $\tau \sim r_\circ/\Delta v$, where $\Delta v$ is the velocity dispersion in the turbulent seeing layers across the line of sight. The structure of the speckle pattern changes randomly over a short interval of time, and the sum of several such statistically uncorrelated patterns from a point source can result in a uniform patch of light a few arcseconds wide. Figures 2 and 3 depict the speckles of the binary star HR4689 and the result of summing 128 specklegrams, respectively. When averaging over many frames, the resultant for spatial frequencies greater than $r_\circ/\lambda$ tends to zero, because the phase-difference (mod $2\pi$) between the two apertures is distributed uniformly with zero mean: the Fourier component performs a random walk in the complex plane and averages to zero. In general, a high quantum efficiency detector is needed to record magnified short-exposure images for such observations.
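The chain from the turbulence integral to Fried's parameter, $r_\circ = [0.423\,k^2\sec\gamma\int{\cal C}_n^2(h)\,dh]^{-3/5}$, and on to the speckle statistics quoted above (number of correlation cells $\sim(D/r_\circ)^2$, lifetime $\sim r_\circ/\Delta v$) can be sketched as follows; all numerical inputs are assumed illustrative values, not figures from the text:

```python
import numpy as np

def fried_parameter(wavelength, cn2_integral, zenith_angle=0.0):
    """r0 = [0.423 * k^2 * sec(gamma) * integral Cn^2(h) dh]**(-3/5), k = 2*pi/lambda."""
    k = 2.0 * np.pi / wavelength
    return (0.423 * k**2 / np.cos(zenith_angle) * cn2_integral) ** (-3.0 / 5.0)

cn2_integral = 5.0e-13                     # assumed turbulence integral, m^(1/3)
r0 = fried_parameter(500e-9, cn2_integral)  # ~0.12 m at 500 nm (good site)

# Speckle statistics for a 4 m telescope; assumed wind dispersion 10 m/s.
d, delta_v = 4.0, 10.0
n_speckles = (d / r0) ** 2                 # number of correlation cells, ~10^3
lifetime = r0 / delta_v                    # speckle lifetime, ~10 ms

# r0 scales as lambda**(6/5): the same atmosphere is far gentler in the IR.
r0_ir = fried_parameter(2.2e-6, cn2_integral)
```

The $\lambda^{6/5}$ scaling in the last line is why adaptive-optics correction and speckle work are so much easier in the infrared than in the visible.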
to compensate for atmospherically induced dispersion at zenith angles larger than a few degrees , either a counter - rotating computer - controlled dispersion - correcting prism or a narrow - bandwidth filteris used .speckle interferometry estimates a ` power spectrum ' which is the ensemble average of the squared modulus of an ensemble of ft from a set of specklegrams , .the intensity of the image , , in the case of quasi - monochromatic incoherent source can be expressed as , where , is an object at a point anywhere in the field of view .the variability of the corrugated wavefront yields ` speckle boiling ' and is the source of speckle noise that arises from difference in registration between the evolving speckle pattern and the boundary of the psf area in the focal - plane .these specklegrams have additive noise contamination , , which includes all additive measurement of uncertainties .this may be in the form of ( i ) photon statistics noise , and ( ii ) all distortions from the idealized iso - planatic model represented by the convolution of with that includes non - linear geometrical distortions .for each of the short - exposure instantaneous records , the imaging equation applies , denoting , for the noise spectrum .the fourier space relationship between object and the image is taking the modulus square of the expression and averaging over many frames , the average image power spectrum is , since is a random function in which the detail is continuously changing , its ensemble average becomes smoother . 
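Labeyrie's estimator, the ensemble average of the squared modulus of the specklegram spectra, can be sketched with a toy simulation in which the "atmosphere" is reduced to a random image shift per frame (a crude stand-in for the real PSF); everything here, including the object, is illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def average_power_spectrum(frames):
    """Labeyrie's estimator: ensemble average of |FT(specklegram)|^2 over frames."""
    ft = np.fft.fft2(frames, axes=(-2, -1))
    return (np.abs(ft) ** 2).mean(axis=0)

# Toy "binary star" and a stack of frames, each randomly (circularly) shifted.
n, size = 200, 32
obj = np.zeros((size, size))
obj[16, 14], obj[16, 18] = 1.0, 0.5
frames = np.empty((n, size, size))
for j in range(n):
    dx, dy = rng.integers(0, size, 2)      # random image motion per frame
    frames[j] = np.roll(np.roll(obj, dx, 0), dy, 1)

ps = average_power_spectrum(frames)
# The random shifts scramble the Fourier *phase* (a long exposure would blur),
# but each shift leaves |FT| untouched, so the binary's cosine fringes
# survive intact in the averaged power spectrum.
```

This is the essential point of the power-spectrum approach: the atmospheric phase corruption averages to zero in the long exposure, while the squared modulus retains the object information out to the diffraction limit.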
by the wiener-khintchine theorem (mandel and wolf, 1995), the inverse ft of equation (73) gives the autocorrelation of the object. in the weighted shift-and-add method, each frame is given a weight in relation with the brightness of its brightest pixel; the choice of the weight can be made proportional to that peak intensity. an array of impulses is constructed by putting an impulse at the center of gravity of each speckle, with a weight proportional to the speckle intensity. this impulse array is considered to be an approximation of the instantaneous psf and is cross-correlated with the speckle frame. disregarding the peaks lower than a pre-set threshold, a speckle mask is defined, and the masked speckle image is obtained by applying this mask to the frame. the lynds-worden-harvey image is obtained by averaging the masked frames (equation 127). for direct speckle imaging, the saa image is a contaminated one containing two complications - a convolution and an additive residual. it is essential to calibrate with an unresolved point source reduced in the same way. the estimate for the object is evaluated from the inverse ft of the calibrated spectrum, which gives the first approximation of the object irradiance. this method is found to be insensitive to the telescope aberrations but sensitive to dominating photon noise. another method, called 'selective image reconstruction', selects the few sharpest images, recorded when the atmospheric distortion is naturally at a minimum, from a large dataset of short exposures (dantowitz et al.). baldwin et al.
(2001) have demonstrated the potential of such a technique. the knox-thompson (kt) method (knox and thompson, 1974) defines the correlation of the image spectrum with itself, multiplied by a complex exponential factor, at two spatial frequencies separated by a small offset vector ${\bf \delta u}$. the approximate phase-closure is achieved by two vectors (see figure 15), assuming that the pupil phase is constant over a patch of size $r_0$. let the general second-order moment be the cross spectrum $<\widehat{i}({\bf u}_1)\,\widehat{i}^\ast({\bf u}_1 + {\bf \delta u})>$; it takes significant values only if $|{\bf \delta u}|$ is smaller than the seeing cut-off frequency $r_0/\lambda$, the typical value being 0.2 - 0.5 $r_0/\lambda$. invoking equation (70), the corresponding correlation of the 2-d irradiance distribution $i({\bf x})$ can be derived in image space, where ${\bf x}$ are 2-d spatial co-ordinate vectors. the kt correlation is defined in fourier space as the product of equation (133), $$<\widehat{i}({\bf u}_1)\,\widehat{i}^\ast({\bf u}_1 + {\bf \delta u})> \;=\; \widehat{o}({\bf u}_1)\,\widehat{o}^\ast({\bf u}_1 + {\bf \delta u})\,<\widehat{\cal s}({\bf u}_1)\,\widehat{\cal s}^\ast({\bf u}_1 + {\bf \delta u})>,$$ where ${\bf u}_1$ and ${\bf u}_1 + {\bf \delta u}$ are 2-d spatial frequency vectors and ${\bf \delta u}$ is a small, constant offset spatial frequency. a number of sub-planes is used by taking different values of ${\bf \delta u}$. the argument of equation (133) provides the phase-difference between the two spatial frequencies separated by ${\bf \delta u}$; the object phase-spectrum is encoded in the term $\exp\{i[\phi({\bf u}_1) - \phi({\bf u}_1 + {\bf \delta u})]\}$. if equation (135) is averaged over a large number of frames, the atmospheric cross term $<\widehat{\cal s}({\bf u}_1)\widehat{\cal s}^\ast({\bf u}_1 + {\bf \delta u})>$ becomes essentially real, from which, together with equation (73), the object phase-spectrum $\phi({\bf u})$ can be determined. the triple correlation (tc) is a generalization of the closure phase technique (section iv.b.2.
) that is used in radio/optical interferometry, where the number of closure phases is small compared to those available from the bispectrum (weigelt, 1977, lohmann et al. 1983). it is insensitive to (i) the atmospherically induced random phase errors, (ii) the random motion of the image centroid, and (iii) the permanent phase errors introduced by telescope aberrations; any linear phase term in the object phase cancels out as well. the images are not required to be shifted to a common centroid prior to computing the bispectrum. the other advantages are: (i) it provides information about the object phases with a better s/n ratio from a limited number of frames, and (ii) it serves as a means of image recovery with diluted coherent arrays (reinheimer and weigelt, 1987). the disadvantage of this technique is that it places severe demands on computing facilities with 2-d data, since the calculations are four-dimensional (4-d). a triple correlation is obtained by multiplying a shifted object with the original object, followed by cross-correlating the result with the original one (for example, in the case of a close binary star, the shift is equal to the angular separation between the stars, masking one of the two components of each double speckle). the ensemble average tc is given by $<i^{(3)}({\bf x}_1, {\bf x}_2)> = <\int i({\bf x})\,i({\bf x} + {\bf x}_1)\,i({\bf x} + {\bf x}_2)\,d{\bf x}>$, where ${\bf x}, {\bf x}_1, {\bf x}_2$ are 2-d spatial co-ordinate vectors. the bispectrum, its ft, is given by $\widehat{i}^{(3)}({\bf u}_1, {\bf u}_2) = \widehat{i}({\bf u}_1)\,\widehat{i}({\bf u}_2)\,\widehat{i}^\ast({\bf u}_1 + {\bf u}_2)$, where ${\bf u}_1$ and ${\bf u}_2$ are 2-d spatial frequency vectors. the object bispectrum is given by $<\widehat{i}^{(3)}({\bf u}_1, {\bf u}_2)> = \widehat{o}^{(3)}({\bf u}_1, {\bf u}_2)\,<\widehat{\cal s}^{(3)}({\bf u}_1, {\bf u}_2)>$; the modulus of the object ft can be evaluated from the object bispectrum. the argument of equation (138) gives the phase-difference (closure phase), $\beta({\bf u}_1, {\bf u}_2) = \phi({\bf u}_1) + \phi({\bf u}_2) - \phi({\bf u}_1 + {\bf u}_2)$, in which the object phase-spectrum is encoded. it is corrupted in a single realization by the random phase-differences due to the atmosphere-telescope otf, but the average bispectrum transfer function $<\widehat{\cal s}^{(3)}({\bf u}_1, {\bf u}_2)>$ is real, so the object phase can be recovered recursively. in the maximum entropy method (mem), the entropy $e = -\sum_k i_k \ln(i_k/p_k)$ is maximized, where $i_k$ represents the image to be restored, and $p_k$ is known as the prior image. it can be shown that $e \le 0$; the equality holds if $i_k = p_k$.
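the insensitivity of the bispectrum to image motion described above can be demonstrated in one dimension: each synthetic frame is the object blurred by an even, positive kernel (so the atmospheric terms stay real; in a real atmosphere they are only approximately so) and shifted by a random integer wander. the closure phase $\phi({\bf u}_1) + \phi({\bf u}_2) - \phi({\bf u}_1 + {\bf u}_2)$ survives the averaging, and a recursion recovers the object phase up to an unknown linear term, i.e. up to image position. all model choices below (grid size, blur widths, wander range) are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_frames = 64, 300
u = np.arange(n)

obj = np.zeros(n)
obj[20], obj[30] = 1.0, 0.6           # unequal 1-d "binary"
big_o = np.fft.fft(obj)
phi = np.angle(big_o)                 # true object phase-spectrum

freq = np.fft.fftfreq(n)
bisp = np.zeros(n, dtype=complex)     # sub-plane B(u, 1) of the bispectrum
for _ in range(n_frames):
    s = np.exp(-(rng.uniform(2.0, 6.0) * freq) ** 2)  # even, positive blur (toy seeing)
    d = rng.integers(-10, 11)                         # random image wander (tilt)
    i_hat = big_o * s * np.exp(-2j * np.pi * u * d / n)
    bisp += i_hat * i_hat[1] * np.conj(np.roll(i_hat, -1))

# the random shifts d cancel exactly in the bispectrum phase:
closure = phi + phi[1] - np.roll(phi, -1)

# recursive phase recovery: phi_hat(u+1) = phi_hat(u) + phi_hat(1) - arg B(u, 1),
# with phi_hat(0) = phi_hat(1) = 0 fixing the (arbitrary) image position
phi_hat = np.zeros(n)
for m in range(1, n - 1):
    phi_hat[m + 1] = phi_hat[m] + phi_hat[1] - np.angle(bisp[m])
```

the recovered phases equal the true ones minus the linear term $u\,\phi(1)$, which merely shifts the reconstructed image; combining them with the modulus from the power spectrum completes the image recovery.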
the value of $e$ is a measure of the similarity between $i_k$ and $p_k$ if the entropy is maximized without any data constraints. with data constraints, the maximum of $e$ will be less than its absolute maximum value of zero, meaning that $i_k$ has been modified and has deviated from its prior model. mem solves a multi-dimensional constrained minimization problem. it uses only the measured data and derives the brightness distribution which is the most random, i.e., has the maximum entropy of any brightness distribution consistent with the measured data. maintaining an adequate fit to the data, it reconstructs the final image that fits the data within the noise level. monnier et al. (2001) have reconstructed the dust shells around two evolved stars, ik tau and vy cma, using $(u, v)$ coverage from the contemporaneous observations at keck-i and iota. figure 16 depicts the mem reconstructions of the dust shells around these stars. their results clearly indicate that without adequate spatial resolution it is improbable to cleanly separate out the contributions of the star from the hot inner radius of the shell (left panel in figure 16). they opined that image reconstructions from interferometer data are not unique and yield results which depend heavily on the biases of a given reconstruction algorithm.
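the entropy measure that mem maximizes, $e = -\sum_k i_k \ln(i_k/p_k)$ for a normalized image and prior (the exact functional varies between mem implementations), can be checked numerically to be non-positive, vanishing only when the image equals its prior - it is the negative of the kullback-leibler divergence:

```python
import numpy as np

rng = np.random.default_rng(4)

def entropy(i, p):
    """e = -sum i ln(i/p): non-positive, zero iff i == p (for normalized i, p)."""
    return -np.sum(i * np.log(i / p))

prior = rng.random(50) + 0.01   # prior image model (strictly positive)
prior /= prior.sum()
image = rng.random(50) + 0.01   # trial image
image /= image.sum()
```

any trial image other than the prior gives $e < 0$, which is why, under data constraints, the mem solution sits below the absolute maximum of zero.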
by including the long-baseline (20 m) data from the iota interferometer (right panel), the fraction of the flux arising from the central star can be included in the image reconstruction process by using the mem prior. one can see, for a dust shell such as that of ik tau, that the additional iota data are critical in accurately interpreting the physical meaning of interferometer data. the thick dashed lines show the expected dust shell inner radius from the data obtained at the isi. [ [ self - calibration ] ] self calibration + + + + + + + + + + + + + + + + when images are produced with accurate visibility amplitudes but poor or absent phases, self-calibration can be used (baldwin and warner, 1976). cornwell and wilkinson (1981) introduced a modification by explicitly solving for the telescope-specific errors as part of the reconstruction step. the measured fourier phases are fit using a combination of intrinsic phases plus telescope phase errors. if the field of observation contains one dominating internal point source, which can be used as an internal phase reference, the visibility phase at other spatial frequencies can be derived. a hybrid map can be made with the measured amplitudes together with model phase distributions. since the measured amplitudes differ from those of the single point source model, the hybrid map differs from the model map. adding some new feature to the original model map, an improved model map is obtained in the next iteration. with careful selection of the features to be added to the model in each iteration, the procedure converges. [ [ linear - approach ] ] linear approach + + + + + + + + + + + + + + + another approach to the deconvolution problem is to formulate it as a linear system of the form ${\bf a}\,{\bf x} = {\bf b}$, and then use algebraic techniques to solve this system. the elements of * a * contain samples of the dirty beam, the elements of * b * are samples of the dirty image, while * x * contains components of the reconstructed image.
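this linear formulation can be sketched numerically. the toy below builds * a * as a band-limited circulant 'dirty beam' matrix (an arbitrary choice that mimics incomplete frequency coverage, so the matrix is singular) and solves for a non-negative * x * with a hand-rolled projected-gradient nonnegative least squares, standing in for a production nnls solver:

```python
import numpy as np

n = 40
x_true = np.zeros(n)
x_true[12], x_true[20] = 1.0, 0.5     # "sky": two point sources

# dirty beam from incomplete frequency coverage: keep only |k| <= 5, so the
# circulant low-pass matrix a is a singular projection (high frequencies lost)
idx = np.arange(n)
mask = (np.minimum(idx, n - idx) <= 5).astype(float)
a = np.real(np.fft.ifft(mask[:, None] * np.fft.fft(np.eye(n), axis=0), axis=0))
b = a @ x_true                        # dirty image (with sidelobes)

# projected gradient: x <- max(0, x - step * a^T (a x - b))
x = np.zeros(n)
step = 1.0                            # safe here: the spectral norm of a is 1
for _ in range(5000):
    x = np.maximum(0.0, x - step * (a.T @ (a @ x - b)))

residual = np.linalg.norm(a @ x - b)
```

because * a * is singular, the positivity projection is what selects a sensible solution among the many that fit the dirty image; replacing the hand-rolled loop with a dedicated nnls routine (as in briggs, 1995) would be the practical choice.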
without any additional constraints, the matrix * a * is singular: additional information has to be added to find a solution. assumptions that regularize the system include positivity and compact structure of the source. an algebraic approach is not new in itself, but practical applications of such techniques in astronomy have become feasible only recently, e.g., for non-negative least squares (nnls, briggs, 1995). [ [ wipe ] ] wipe + + + + lannes et al. (1994) present another technique, wipe, based on a least-squares approach, in which support information is used to regularize the algorithm. again, a linear system similar to that mentioned in the previous paragraph is solved, but using a technique which iterates between the $(u, v)$- and image-planes. unlike clean and mem, wipe suppresses excessive extrapolation of the higher spatial frequencies during deconvolution. optical interferometry in astronomy is a boon for astrophysical studies. the following sub-sections dwell on the astrophysical importance and the perspectives of interferometry. single-aperture interferometry has contributed to the study of the sun and solar system, and to a variety of stellar astrophysical problems. the existence of solar features with sizes of the order of 100 km or smaller was found by means of speckle imaging (harvey, 1972, harvey and breckinridge, 1973).
from observations of photospheric granulation from disk center to limb, wilken et al. (1997) found a decrease of the relative rms contrast of the granular intensity from center to limb. time series of high spatial resolution images reveal the highly dynamical evolution of sunspot fine structure, namely umbral dots, penumbral filaments, and facular points (denker, 1998). small-scale brightenings in the vicinity of sunspots were also observed in the wings of strong chromospheric absorption lines. these structures, which are concomitant with strong magnetic fields, show brightness variations close to the diffraction-limit of the vacuum tower telescope (0.16 arcsec at 550 nm), observatorio del teide (tenerife). with the phase-diverse speckle method, seldin et al. (1996) found that the photosphere at scales below 0.3 is highly dynamic. speckle imaging has been successful in resolving the heavenly dance of the pluto-charon system (bonneau and foy, 1980), as well as in determining shapes of asteroids (drummond et al.). reconstructions of jupiter with sub-arcsecond resolution have also been carried out by saha et al. (1997). studies of close binary stars play a fundamental role in measuring stellar masses, providing a benchmark for stellar evolution calculations; a long-term benefit of interferometric imaging is a better calibration of the main-sequence mass-luminosity relationship. obtaining masses involves combining the spectroscopic orbit with the astrometric orbit from the interferometric data. continuous observations need to be carried out in order to derive accurate orbital elements and hence masses, luminosities and distances. the radiative transfer concerning the effects of irradiation on line formation in the expanding atmosphere of a component that is distorted mainly by physical effects, viz., (i) rotation of the component, and (ii) the tidal effect, can be modeled as well. more than 8000
interferometric observations of stellar objects have been reported so far, out of which 75% are of binary stars (hartkopf et al.). the separations of most of the newly discovered components are found to be less than 0.25. from an inspection of the interferometric data, mason et al. (1999) have confirmed the binary nature of 848 objects discovered by the hipparcos satellite; prieur et al. (2001) reported high angular resolution astrometric data of 43 binary stars that were also observed with the same satellite. torres et al. (1997) derived individual masses for tau using the distance information from tau; they found the empirical mass-luminosity relation in good agreement with theoretical models. gies et al. (1997) measured the radial velocity for the massive binary 15 mon. with the speckle spectrograph, baba, kuwamura, miura et al. (1994) have observed a binary star, and (= 0.53) and found that the primary (a be star) has h in emission while the companion has h in absorption. the high angular resolution polarization measurements of the pre-main sequence binary system z cma at 2.2 m by fischer et al. (1998) reveal that both components are polarized; the secondary showed an unexpectedly large degree of polarization. studies of multiple stars are also an important aspect that can reveal mysteries. for instance, r136a was thought to be a very massive star with a mass of (cassinelli et al. 1981). speckle imagery revealed that r136a was a dense cluster of stars (weigelt and baier, 1985, pehlemann et al.). r64 (schertl et al.
1996 ) , hd97950 , and the central object of giant hii region starburst cluster ngc3603 ( hofmann et al .1995 ) have been observed as well .the star - like luminous blue variable ( lbv ) , carinae , an intriguing massive southern object , was found to be a multiple object ( weigelt and ebersberger , 1986 ) .the polarimetric reconstructed images with a 0.11 resolution in the h line of carina exhibit a compact structure elongated in consistence with the presence of a circumstellar equatorial disk ( falcke et al .1996 ) .many supergiants have extended gaseous envelope which can be imaged in their chromospheric lines .the diameter of a few such objects , ori and mira , are found to be wavelength dependent ( bonneau and labeyrie , 1973 , labeyrie et al .1977 , weigelt et al . 1996 ) .recent studies have also confirmed the asymmetries on their surfaces ; the presence of hotspots are reported as well ( wilson et al . 1992 , haniff et al . 1995 , bedding , zijlstra et al .1997 , tuthill et al .1997 , monnier et al . 
1999 ,tuthill , monnier et al .the surface imaging of long period variable stars ( tuthill , haniff et al .1999 ) , vy cma reveals non - spherical circumstellar envelope ( wittkowski , langer et al .monnier et al .( 1999 ) have found emission to be one - sided , inhomogeneous and asymmetric in ir and derived the line - of - sight optical depths of its circumstellar dust shell .the radiative transfer modeling of the supergiant nml cyg reveals the multiple dust shell structures ( blcker et al .2001 ) .high resolution imagery may depict the spatial distribution of circumstellar matter surrounding objects which eject mass , particularly in young compact planetary nebula ( ypn ) or newly formed stars in addition to t tauri stars , late - type giants or supergiants .the large , older and evolved planetary nebula ( pn ) show a great variety of structure ( balick , 1987 ) that are ( a ) spherically symmetric ( a39 ) , ( b ) filamentary ( ngc6543 ) , ( c ) bipolar ( ngc6302 ) , and ( d ) peculiar ( a35 ) .the structure may form in the very early phases of the formation of the nebula itself which is very compact and unresolved . by making maps at many epochs , as well as by following the motion of specific structural features, one would be able to understand the dynamical processes at work .the structures could be different in different spectral lines e.g. 
, ionization stratification in ngc6720 ( hawley and miller , 1977 ) , and hence maps can be made in various atomic and ionic emission lines too .major results , such as , ( i ) measuring angular diameters of several ypns ( barlow et al .1986 , wood et al .( ii ) resolving five individual clouds around carbon star irc+10216 ( see figure 17 ) with a central peak surrounded by patchy circumstellar matter ( osterbart et al .1996 , weigelt et al .1998 ) , ( iii ) exhibiting two lobes of the evolved object , the red rectangle ( see figure 18 ) ( osterbart et al .1996 ) , ( iv ) revealing a spiral pinwheel in the dust around wr104 ( tuthill , monnier et al .1999 ) , and ( v ) depicting spherical dust shell around oxygen - rich agb star afgl 2290 ( gauger et al .1999 ) are to be noted ; images of the young star , lkh 101 in which the structure of the inner accretion disk is resolved have been reported as well ( tuthill et al .detailed information that is needed for the modeling of the 2-d radiative transfer concerning the symmetry spherical , axial or lack of clouds , plumes etc . of the objects can also be determined ( menshchikov and henning , 1997 , gauger et al .1999 ) .both novae and supernovae ( sn ) have a complex nature of shells viz . ,multiple , secondary and asymmetric .high resolution mapping may depict the events near the star and the interaction zones between gas clouds with different velocities .soon after the explosion of the supernova sn1987a , various observers monitored routinely the expansion of the shell in different wavelengths ( nisenson et al .1987 , papaliolios et al .1989 , wood et al . 1989 ) .the increasing diameter of the same in several spectral lines and the continuum was measured ( karovska et al . 1989 ) .nulsen et al .( 1990 ) have derived the velocity of the expansion as well and found that the size of this object was strongly wavelength dependent at the early epoch pre - nebular phase indicating stratification in its envelope . 
a bright source at 0.06 away from the sn1987a with 2.7 mag at h had also been detected .recently , nisenson and papaliolios ( 1999 ) have detected a faint second spot , 4.2 mag , on the opposite side of sn1987a with = 160 mas .another important field of observational astronomy is the study of the physical processes , viz ., temperature , density and velocity of gas in the active regions of active galactic nuclei ( agn ) ; optical imaging by emission lines on sub - arcsecond scales can reveal the structure of the narrow - line region .the scale of narrow - line regions is well resolved by the diffraction - limit of a moderate - sized telescope .the time variability of agns ranging from minutes to decades can also be studied .the ngc 1068 is an archetype type 2 seyfert galaxy .observations of this object corroborated with theoretical modeling like radiative transfer calculations have made significant contributions on its structure .ebstein et al .( 1989 ) found a bipolar structure of this object in the [ oiii ] emission line .near - ir observations at the keck i telescope trace a very compact central core and extended emission with a size of the order of 10 pc on either side of an unresolved nucleus ( weinberger et al .wittkowski , balega et al .( 1998 ) have resolved central 2.2 m core by bispectrum speckle interferometry at the diffraction - limit of the special astrophysical observatory ( sao ) 6 m telescope , with a fwhm size of pc for an assumed gaussian intensity distribution .figure 19 depicts the reconstructed image of the agn , ngc1068 .subsequent observations by wittkowski et al .( 1999 ) indicate that this compact core is asymmetric with a position angle of and an additional more extended structure in n - s direction out to pc .quasars ( qso ) may be gravitationally lensed by stellar objects such as , stars , galaxies , clusters of galaxies etc . 
, located along the line of sight. the aim of high angular resolution imagery of these qsos is to find their structure and components; their number and structure serve as a probe of the distribution of mass in the universe. the capability of resolving these objects in the range of 0.2 to 0.6 would allow the discovery of more lensing events. the gravitational image of the multiple qso pg1115+08 was resolved by foy et al. (1985); one of the bright components, discovered to be double (hege et al. 1981), was found to be elongated, which might be, according to them, due to a fifth component of the qso. most of the results obtained from ground-based telescopes equipped with ao systems are in the near-ir band, while results at visible wavelengths continue to be sparse. the contributions are in the form of studying (i) planetary meteorology (poulet and sicardy, 1996, marco et al. 1997, roddier et al. 1997); images of neptune's ring arcs have been obtained (sicardy et al. 1999) and are interpreted as gravitational effects of one or more moons, (ii) the nucleus of m31 (davidge, rigaut et al. 1997), (iii) young stars and multiple star systems (bouvier et al. 1997), (iv) the galactic center (davidge, simons et al. 1997), (v) seyfert galaxies and qso host galaxies (hutchings et al. 1998, 1999), and (vi) circumstellar environments (roddier et al. 1996). images of objects such as (a) the nuclear region of ngc3690 in the interacting galaxy arp 299 (lai et al. 1999), (b) the starburst/agns ngc863, ngc7469, ngc1365 and ngc1068, (c) the core of the globular cluster m13 (lloyd-hart et al. 1998), and (d) r136 (brandl et al. 1996), etc., have been obtained from moderate-sized telescopes. the highest ever angular resolution ao images of the radio galaxy 3c294 in the near-ir bands have been obtained at keck observatory (quirrenbach et al.
2001). ao systems can also be employed for studying young stars, multiple stars, natal disks and related inward flows, jets and related outward flows, proto-planetary disks, brown dwarfs and planets. roddier et al. (1996) have detected a binary system consisting of a k7-m0 star with an m4 companion that rotates clockwise; they suggest that the system might be surrounded by a warm unresolved disk. the massive star sanduleak-66 in the lmc was resolved into 12 components by heydari and beuzit (1994). success in resolving companions to nearby dwarfs has been reported (beuzit et al. 2001, kenworthy et al. 2001). macintosh et al. (2001) measured the position of the brown dwarf companion to twa5 and resolved the primary into an 0.055 double. the improved resolution of crowded fields like globular clusters allows derivation of luminosity functions and spectral types, and analysis of proper motions in their central areas. simon et al. (1999) have detected 292 stars in the dense trapezium star cluster of the orion nebula and resolved pairs down to the diffraction-limit of a 2.2 m telescope. from optical and near-ir observations of the close herbig ae/be binary star nx pup (brandner et al. 1995), associated with the cometary globule 1, schöller et al. (1996) estimated the mass and age of both components and suggest that circumstellar matter around the former could be described by a viscous accretion disk. line and continuum fluxes, and equivalent widths, are also derived for the massive stars in the arches cluster (blum et al. 2001). stellar populations in galaxies in the near-ir region provide the peak of the spectral energy distribution for old populations. bedding, minniti et al. (1997) have observed the sgr a window at the galactic center of the milky way. they have produced an ir luminosity function and color-magnitude diagram for 70 stars down to .5 mag. figure 20 depicts the adonis k image of the sgr a window. images have been
obtained of the star forming region messier 16 ( currie et al . 1996 ) , the reflection nebula ngc2023 revealing small - scale structure in the associated molecular cloud , close to the exciting star , in orion ( rouan et al .close et al .( 1997 ) mapped near - ir polarimetric observations of the reflection nebula r mon resolving a faint source , 0.69 away from r mon and identified it as a t tauri star .monnier et al .( 1999 ) found a variety of dust condensations that include a large scattering plume , a bow shaped dust feature around the red supergiant vy cma ; a bright knot of emission 1 away from the star is also reported .they argued in favor of the presence of chaotic and violent dust formation processes around the star .imaging of proto - planetary nebula ( ppn ) , frosty leo and the red rectangle by roddier et al .( 1995 ) revealed a binary star at the origin of these ppns .imaging of the extragalactic objects particularly the central area of active galaxies where cold molecular gas and star formation occur is an important program . from the images of nucleus of ngc1068 ,rouan et al . (1998 ) , found several components that include : ( i ) an unresolved conspicuous core , ( ii ) an elongated structure , and ( iii ) large and small - scale spiral structures .lai et al . 
(1998) have recorded images of markarian 231, a galaxy 160 mpc away, demonstrating the limits of what can be achieved in terms of morphological structures of distant objects. aretxaga et al. (1998) reported the unambiguous detection of the host galaxy of a normal radio-quiet qso at high redshift in the k-band; detection of emission-line gas within the host galaxies of high-redshift qsos has been reported as well (hutchings et al.). observations by ledoux et al. (1998) of the broad absorption line quasar apm 08279+5255 at z=3.87 show that the object consists of a double source (= 0.35 ± 0.02; intensity ratio = 1.21 ± 0.25 in h band). they proposed a gravitational lensing hypothesis, supported by the uniformity of the quasar spectrum as a function of spatial position. a search for molecular gas in high-redshift normal galaxies in the foreground of the gravitationally lensed quasar q1208+1011 has also been made (sams et al. 1996). ao imaging of a few low and intermediate redshift quasars has been reported recently (márquez et al. 2001). high resolution stellar coronagraphy is of paramount importance in (i) detecting low mass companions, e.g., both white and brown dwarfs, and dust shells around asymptotic giant branch (agb) and post-agb stars, (ii) observing nebulosities leading to the formation of a planetary system, ejected envelopes, and accretion disks, and (iii) understanding the structure (torus, disk, jets, star forming regions) and dynamical processes in the environment of agns and qsos.
by means of coronagraphic techniques the environs of a few interesting objects have been explored .they include : ( i ) a very low mass companion to the astrometric binary gliese 105 a ( golimowski et al .1995 ) , ( ii ) a warp of the circumstellar disk around the star pic ( mouillet et al .1997 ) , ( iii ) highly asymmetric features in ag carina s circumstellar environment ( nota et al .1992 ) , ( iv ) bipolar nebula around the lbv r127 ( clampin et al .1993 ) , and ( v ) the remnant envelope of star formation around pre - main sequence stars ( nakajima and golimowski , 1995 ) .the main objective of lbois is to measure the diameters , distances , masses and luminosities of stars , to detect the morphological details , such as granulations , oblateness of giant stars , and the image features , i.e. , spots and flares on their surfaces .eclipsing binaries are also good candidates ; for they provide information on circumstellar envelopes such as , the diameter of inner envelope , color , symmetry , radial profile etc . as stated earlier ( in section vii.a.2 ), good spectroscopic and interferometric measurements are required to derive precise stellar masses since they depend on .a small variation on the inclination implies a large variation on the radial velocities .most of the orbital calculations that are carried out with speckle observations are not precise to provide masses better than 10% ( pourbaix , 2000 ) .the results obtained so far with lbois are from the area of stellar angular diameters with implications for emergent fluxes , effective temperatures , luminosities and structure of the stellar atmosphere , dust and gas envelopes , binary star orbits with impact on cluster distances and stellar masses , relative sizes of emission - line stars and emission region , stellar rotation , limb darkening , astrometry etc .( saha , 1999 , saha and morel , 2000 , quirrenbach , 2001 and references therein ) . 
the angular diameter for more than 50 stars has been measured ( dibenedetto and rabbia , 1987 , mozurkewich et al . 1991 , dyck et al . 1993 , nordgren et al . 2000 , perin et al .1999 , kervella et al .2001 , van belle et al .2001 ) with accuracy better than 1% in some cases .interesting results that have been obtained using i2 t and mark iii interferometers are : ( i ) measuring diameters , effective temperatures of giant stars ( faucherre et al .1983 , dibenedetto and rabbia , 1987 ) , ( ii ) resolving the gas envelope of the be star cas in the h line ( thom et al . 1986 ) , and structure of circumstellar shells ( bester et al . 1991 ) , and ( iii ) determining orbits for spectroscopic , and eclipsing binaries ( armstrong et al .1992 , shao and colavita , 1994 ) .the gi2 t is being used regularly to observe the be stars , lbvs , spectroscopic and eclipsing binaries , wavelength dependent objects , diameters of bright stars , and circumstellar envelopes . however , the scientific programs are restricted by the low limiting visible magnitude down to 5 ( seeing and visibility dependent ) . the first successful result that is reported on resolving the rotating envelope of hot star , cas came out of this instrument in 1989 .mourard et al .( 1989 ) observed this star with a spectral resolution of 1.5 centered on h .as many as ,000 short - exposure images were recorded by a photon - counting camera , cp40 ( blazit , 1986 ) . they have digitally processed each image , which contained photons , using the correlation technique .the results were co - added to reduce the effect of atmospheric seeing and photon noise , according to the principle of speckle interferometry ( labeyrie , 1970 ) . with the central star as a reference, they have determined the relative phase of the shell visibility and showed clearly the rotation of the envelope .this result demonstrates the potential of observations that combines spectral and spatial resolution . 
through subsequent observations on later dates, stee et al. (1995, 1998) derived the radiative transfer model. using the data obtained since 1988 with this instrument, berio, stee et al. (1999) have found evidence of a slowly prograde rotating density pattern in the said star's equatorial disk. indeed, cas has been a favorite target of the gi2t; with further systematic monitoring of multiple emission lines, the formation, structure, and dynamics of other be stars can be addressed. the other noted results obtained in recent times with this instrument include the mean angular diameter and accurate distance estimate of cep (mourard et al. 1997), subtle structures in the circumstellar environment such as jets in the binary system lyr (harmanec et al. 1996), clumpiness in the wind of p cyg (vakili et al. 1997), and detection of a prograde one-armed oscillation in the equatorial disk of the be star tau (vakili et al. 1998). with the susi instrument, davis et al. (1998, 1999b) have determined the diameter of cma with an accuracy of 0.8%. from the data obtained at the coast, aperture-synthesis maps of the double-lined spectroscopic binary aur (baldwin et al. 1996) depict the milli-arcsecond orbital motion of the system over a 15-day interval. images of ori reveal a circularly symmetric brightness distribution with an unusual flat-topped limb-darkened profile (burns et al.). young et al. (2000) have found a strong variation in the apparent symmetry of the brightness distribution as a function of wavelength. variations over the cycle of pulsation of the mira variable r leo have been measured (burns et al. 1998). with iota, angular diameters and effective temperatures have been measured for carbon stars (dyck et al. 1996), mira variables (van belle et al. 1996, 1997), cool giant stars (perrin et al. 1998, 1999), cepheids (kervella et al. 2001), and the dust shell of ci cam (traub et al.
1998). Millan-Gabet et al. (2001) have resolved the circumstellar structure of Herbig Ae/Be stars in the near-IR. Figure 21 depicts examples of H-band visibility data and models for two sources. The lower right panel of Figure 21 illustrates the observed lack of visibility variation with baseline rotation, consistent with circumstellar emission from dust distributed either in a spherical envelope or in a disk viewed almost face-on. Figure 22 summarizes the existing set of measurements of the near-IR sizes of these Herbig sources (credit: R. Millan-Gabet and J. D. Monnier), using data from Danchi et al. (2001), Millan-Gabet et al. (2001), and Tuthill et al. The agreement observed in most cases has motivated in part a revision of disk physics in models of Herbig Ae/Be systems (Natta et al. 2001). In the new models, the gas in the inner disk is optically thin, so that dust at the inner edge is irradiated frontally and expands, forming a 'puffed-up' inner wall. Owing to the extra heating, compared with the irradiation of the flat disk traditionally considered, this model yields larger near-IR sources, essentially corresponding to the 'puffed-up' inner wall, which appears as a bright ring to the interferometer. Monnier et al. (2001) have reported results of decomposing the dust and stellar signatures of evolved stars (Figure 16). With PTI, Malbet et al. (1998) resolved the young stellar object (YSO) FU Ori in the near-IR with a projected resolution better than 2 AU. Measurements of diameters and effective temperatures of cool stars and Cepheids have been reported (van Belle et al. 1999, Lane et al. 2000). The visual orbit of the spectroscopic binary ι Peg has also been derived from interferometric visibility data (Boden et al.
1999). Direct observations of the oblate photosphere of a main-sequence star, α Aql (van Belle et al. 2001), have been carried out. Ciardi et al. (2001) have resolved the stellar disk of α Lyr in the near-IR. Employing NPOI, Hummel et al. (1998) determined the orbital parameters of two spectroscopic binaries, ζ UMa (Mizar A) and η Peg (Matar), and derived masses and luminosities. Limb-darkened diameters have also been measured for late-type giant stars (Hajian et al. 1998, Pauls et al. 1998, Wittkowski et al. 2001) and Cepheid variables (Nordgren et al. 2000). The observing programs of the ISI have been aimed at determining the spatial structure and temporal evolution of the dust shells around long-period variables. From data obtained with this instrument at 11.15 µm, Danchi et al. (1994) showed that the radius of dust formation depends on the spectral type of the star. Lopez et al. (1997) found a strong dependence on the pulsation phase. Observations have been made of NML Cyg and α Sco, as well as of changes in the dust shells around Mira and IK Tau (Hale et al. 1997, Lopez et al. 1997), and of mid-IR molecular absorption features of ammonia and silane in IRC+10216 and VY CMa (Monnier et al. 2000). Recent observations with two of the VLT telescopes have measured the angular diameter of the blue dwarf α Eri, which was found to be 1.92 mas (Glindemann and Paresce 2001). Subsequently they (i) derived the diameters of a few red dwarf stars and (ii) determined the variable diameters of a few pulsating Cepheid stars, as well as measuring the core of η Carinae. The future of high resolution optical astronomy lies with the new generation arrays, but their implementation is a challenging task. Numerous technical challenges in developing such systems will require careful attention. Nevertheless, steady progress has enabled scientists to expand their knowledge of astrophysical processes.
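Diameter measurements like those above rely on fitting the measured fringe visibilities with a disk model; for a uniform disk of angular diameter θ observed on baseline B at wavelength λ, V = |2 J1(x)/x| with x = πθB/λ. A minimal sketch follows; the 1.92 mas figure is taken from the text, while the K-band wavelength and the baselines are illustrative values, and J1 is computed from its power series rather than a library call:

```python
import math

def j1(x):
    """Bessel function J1(x) via its power series (adequate for |x| < ~15)."""
    term = x / 2.0
    total = term
    for k in range(1, 40):
        term *= -(x * x / 4.0) / (k * (k + 1))
        total += term
    return total

def ud_visibility(theta_mas, baseline_m, wavelength_m):
    """Fringe visibility of a uniform disk of angular diameter theta_mas."""
    theta_rad = theta_mas * math.pi / (180.0 * 3600.0 * 1000.0)  # mas -> rad
    x = math.pi * theta_rad * baseline_m / wavelength_m
    if x == 0.0:
        return 1.0
    return abs(2.0 * j1(x) / x)

# Illustrative: a 1.92 mas disk observed at 2.2 um (K band).
for b in (10.0, 50.0, 100.0, 140.0):
    print(b, round(ud_visibility(1.92, b, 2.2e-6), 3))
```

In practice one measures V on several baselines and solves for the θ that best fits the curve; the first null of V (at x ≈ 3.832) sets the baseline needed to fully resolve the disk.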
With improved technology, interferometric arrays of large telescopes may provide snapshot images at their recombined focus and yield improved images, spectra of quasar host galaxies, and astrometric detection and characterization of extra-solar planets. The expected limiting magnitude of a hyper-telescope imaging technique is found by numerical simulation to be 8.3 if 10-cm apertures are used, and 20 for 10-m apertures (Pedretti and Labeyrie 1999). The limit is expected to increase with the Carlina array (Labeyrie 1999b), a 100-element hyper-telescope with a diameter of 200 m, shaped like the Arecibo radio telescope in Puerto Rico. High-precision astrometry also helps in establishing the cosmic distance scale; measurements of proper motion can confirm stars as members of a cluster of known distance, which may elucidate the dynamics of the Galaxy. The quest for extra-solar planets (Wallace et al. 1998) is a challenge for narrow-angle astrometry. Very valuable astrometric results from space have already been obtained by the Hipparcos satellite (Perryman 1998). Hipparcos used phase-shift measurements of the temporal evolution of the photometric level of two stars seen drifting through a grid. The successor of Hipparcos, GAIA (Lindegren and Perryman 1996), will probably use the same technique with improvements, yielding more accurate results for a larger number of objects. However, only space-borne interferometers will achieve very high precision angular measurements. As many as seventy-six Jovian-size planets orbiting stars have been identified through the Doppler-Fizeau effect (Mayor and Queloz 1995, Butler and Marcy 1996, http://exoplanets.org). For one of them, an atmosphere containing sodium (observed in the sodium resonance doublet at 589.3 nm) has been detected in absorption as the planet transits its parent star HD 209458 (Charbonneau et al.
2001). It may also be possible to detect smaller planets by measuring the motion of the stellar photocenter due to the wobble. Such photocenter measurements will require diffraction-limited imaging even for the best possible candidates. Interferometry can be used to measure the 'wobble' in the position of a star caused by the transverse component of a companion's motion. A planet orbiting a star causes a revolution of the star around the center of gravity defined by the two masses. As with radial-velocity measurements, this small periodic motion has a radial counterpart measurable from the ground by spectrometry. The aim of space interferometers like Darwin and TPF is the discovery and characterization of terrestrial planets around nearby stars (closer than 15 pc) by direct detection (i.e., involving the detection of photons from the planet and not from the star, as is done with Doppler-Fizeau or wobble detection). The difficulties in achieving Earth-like planet detection come from (i) minimizing scattered light from the parent star and (ii) the presence of exo-zodiacal light (infrared emission from the dust surrounding the parent star). The interferometric nulling technique will be useful to address the first issue. Knowledge of the chemical composition of a planetary atmosphere gives hints about the likelihood of finding carbon-based life. Lovelock (1965) suggested that the simultaneous presence on Earth of a highly oxidized gas, like O2, and highly reduced gases, like CH4, is the result of biochemical activity. However, finding spectral signatures of these gases on an extra-solar planet would be very difficult. An alternative life indicator would be ozone (O3), detectable as an absorption feature at 9.6 µm. On Earth, ozone is photochemically produced from O2 and, as a component of the stratosphere, is not masked by other gases.
Finding ozone would, therefore, indicate a significant quantity of O2 that should have been produced by photosynthesis (Léger et al. 1993). Moreover, for a star like the Sun, detecting ozone can be done 1000 times faster than detecting O2 at 0.76 µm: estimates made by Angel and Woolf (1997) show that the requirements for planet detection in the visible with an 8-m telescope are beyond current technology. Space-borne interferometry projects for the years spanning 2020 to 2050 already exist; such projects must be regarded as drafts for future instruments. For the post-TPF era, NASA has imagined an enhanced version featuring four 25-m telescopes and a spectrometer. This interferometer would be able to detect, in an extra-solar planet's spectrum, lines of gases directly produced by biochemical activity. The next step proposed by NASA is an array of 25 telescopes, 40 m in diameter each, that would yield 25-pixel images of an Earth-like planet at 10 pc, revealing its geography and eventually oceans or chlorophyll zones. A comparable project has been proposed by Labeyrie (1999a). It consists of 150 telescopes, 3 m in diameter each, forming an interferometer with a 150-km maximum baseline. Such an instrument, equipped with a highly efficient coronagraph, would give a 40-pixel image of an Earth-like planet at 3 pc. Figure 23 depicts a simulated image of an Earth-like planet detection (Labeyrie 1999b). Developing an LBI for lunar operation, consisting of 20 to 27 off-axis parabolic segments carried by robotic hexapods that are movable during an observing run, has also been suggested (Arnold et al. 1996). Earth-bound astronomical observations are strongly affected by atmospheric turbulence, which sets severe limits on the angular resolution; in the optical domain it is rarely better than 0.5 arcsec. A basic understanding of the interference phenomenon is of paramount importance to other branches of physics and engineering as well.
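The size of the astrometric 'wobble' discussed earlier is easy to estimate: the star circles the barycenter with semi-major axis a* = (m_p/M_*)·a_p, which subtends an angle a*/d on the sky (in arcseconds when a* is in AU and d in pc, by the definition of the parsec). A back-of-the-envelope sketch with illustrative numbers:

```python
# Astrometric wobble of a star due to an orbiting planet (toy estimate).
M_JUP_PER_SUN = 9.546e-4   # Jupiter's mass in solar masses

def wobble_uas(m_planet_mjup, m_star_msun, a_planet_au, distance_pc):
    """Angular semi-amplitude of the photocenter motion, in micro-arcseconds.

    a* in AU divided by d in pc gives the angle directly in arcseconds.
    """
    a_star_au = (m_planet_mjup * M_JUP_PER_SUN / m_star_msun) * a_planet_au
    return a_star_au / distance_pc * 1e6   # arcsec -> micro-arcsec

# Jupiter around a solar-mass star seen from 10 pc: roughly 500 micro-arcsec.
print(round(wobble_uas(1.0, 1.0, 5.2, 10.0)))
# An Earth-mass planet at 1 AU, same distance: well below one micro-arcsec.
print(wobble_uas(3.15e-3, 1.0, 1.0, 10.0))
```

The milli-arcsecond-level Jovian signal is within reach of narrow-angle interferometric astrometry, while the sub-micro-arcsecond terrestrial signal illustrates why direct detection (nulling) is pursued for Earth-like planets.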
In recent years, a wide variety of applications of speckle patterns has been found in many areas. Though the statistical properties of the speckle pattern are complicated, a detailed analysis of this pattern is useful in information processing. Image processing is a very important subject. A second-order moment (power spectrum) analysis provides only the modulus of the object's Fourier transform, whereas a third-order moment (bispectrum) analysis yields the phase, allowing the object to be fully reconstructed. A more recent attempt to go beyond the third order, e.g., the fourth-order moment (trispectrum), illustrates its utility in finding optimal quadratic statistics for the weak gravitational lensing effect (Hu 2001). This algorithm provides a far more sensitive test than the bispectrum for some possible sources of non-Gaussianity (Kunz et al. 2001); however, its implementation in optical imaging is a computationally difficult task. Deconvolution is an important topic as well. It applies to imaging in general and covers methods spanning from simple linear deconvolution algorithms to complex non-linear algorithms.
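The statement that the power spectrum discards phase while the bispectrum preserves it can be checked directly. For a circularly shifted copy of a signal, F(k) picks up a linear phase ramp e^{-2πiks/n}; in the triple product F(u)F(v)F*(u+v) the ramps cancel because u + v − (u+v) = 0. So the bispectrum, unlike the Fourier phase itself, is identical for every shifted frame, which is what lets it be averaged over atmospherically shifted exposures. A 1-D sketch with an arbitrary test signal:

```python
import cmath
import random

def dft(x):
    """Plain discrete Fourier transform (small arrays only)."""
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * k * j / n) for j in range(n))
            for k in range(n)]

def bispectrum(F, u, v):
    """Triple product F(u) F(v) F*(u+v); invariant under circular shifts."""
    n = len(F)
    return F[u] * F[v] * F[(u + v) % n].conjugate()

random.seed(0)
n = 16
obj = [random.random() for _ in range(n)]
F = dft(obj)

for s in (1, 5, 11):                       # arbitrary circular shifts
    shifted = [obj[(j - s) % n] for j in range(n)]
    G = dft(shifted)
    # The Fourier phase changes with the shift ...
    print(cmath.phase(F[2]), cmath.phase(G[2]))
    # ... but the bispectrum does not (up to rounding):
    print(bispectrum(F, 2, 3), bispectrum(G, 2, 3))
```

Full image reconstruction then proceeds by averaging the bispectrum over frames and unwrapping the object phases recursively from the bispectrum phases, with the modulus taken from the mean power spectrum.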
In astronomy, the field of research that has probably benefited the most from high angular resolution techniques using single telescopes, and will continue to benefit in the future, is undoubtedly the origin and evolution of stellar systems. This evolution starts with star formation, including multiplicity, and ends with the mass-loss process that recycles heavier elements into the interstellar medium. Large-scale star formation provides coupling between small-scale and large-scale processes. Stellar chemical evolution, or nucleosynthesis, which results from star formation activity, further influences the evolutionary process. High resolution observations are fruitful for the detection of proto-planetary disks and possibly planets (either astrometrically, through their influence on the disk, or even directly). The technique is also being applied to studies of starburst and Seyfert galaxies, AGNs, and quasars. Studies of the morphology of stellar atmospheres, the circumstellar environments of novae and supernovae, YPN, long-period variables (LPVs), the rapid variability of AGNs, etc., are also essential. In spite of the limited capability of retrieving fully diffraction-limited images of objects, AO systems are now offered to users of large telescopes. AO observations have contributed to the study of the solar system and have added to the results of space-borne instruments; examples include the monitoring of volcanic activity on Io and of the cloud cover on Neptune, the detection of Neptune's dark satellites and arcs, and the ongoing discovery of companions to asteroids. They are now greatly contributing to the study of the Sun itself as well (Antoshkin et al. 2000, Rimmele 2000). Combining AO systems with speckle imaging may enhance the results (Dayton et al. 1998).
By the end of the next decade (post 2010), observations using AO systems on new generation telescopes like OWL will revolutionize the mapping of ultra-faint objects such as blazars, which exhibit the most rapid and largest amplitude variations of all AGN (Ulrich et al. 1997), and extra-solar planets; certain aspects of galactic evolution, like chemical evolution in the Virgo cluster of galaxies, can be studied as well. A host of basic problems needs very high angular resolution for its solution. Among others, an important fundamental problem that can be addressed with interferometry and AO is the origin and evolution of galaxies. The upcoming large facilities with phased arrays of multiple 8-10 m sub-apertures will simultaneously provide larger collecting areas and higher spatial resolution than the current interferometers. These instruments, fitted with complete AO systems, would be able to provide imaging and morphological information on faint extragalactic sources such as galactic centers in the young universe, deep fields, and host galaxies. Measurements of such objects may be made feasible by instruments with fairly complete coverage and a large field of view. The derivation of motions and parallaxes of galactic centers seems feasible with phase-reference techniques. The origin of faint structures close to non-obscured central sources can also be studied in detail with interferometric polarization measurements. The capabilities of the proposed large arrays offer a revolution in the study of compact astronomical sources from the solar neighborhood to cosmological distances. The aims range from detecting other planetary systems to imaging the black hole driven central engines of quasars and active galaxies; gamma-ray bursters may be other candidates. Another important scientific objective is the recording of spectra to derive velocities and to determine black hole masses as a function of redshift.
At the beginning of the present millennium, several such arrays will be in operation, both on the ground and in space. Space-borne interferometers are currently planned to detect planets either astrometrically (SIM) or directly (TPF). Projects like Darwin and other ambitious imaging instruments may also come to function. The author expresses his gratitude to A. Labeyrie, S. T. Ridgway, and P. A. Wehinger for comments on the review, and indebtedness to V. Coudé du Foresto, P. R. Lawson, and S. Morel for valuable communications. Thanks are also due to T. R. Bedding, A. Boccaletti, P. M. Hinz, R. Millan-Gabet, J. D. Monnier, R. Osterbart, and M. Wittkowski for providing the images, figures, etc., and granting permission for their reproduction, as well as to H. Bradt and P. Hickson for reading the manuscript. The services rendered by V. Chinnappan, K. R. Subramanian, and B. A. Varghese are gratefully acknowledged. ables j. g., 1974, astron., *15*, 383. aime c., 2000, j. opt. a: pure & appl. opt., *2*, 411. akeson r., m. swain, and m. colavita, 2000, spie, *4006*, 321. anderson j. a., 1920, astrophys. j., *51*, 263. angel j. r. p., 1994, nature, *368*. angel j. r. p., 2001, nature, *409*, 427. angel j. r., and n. j. woolf, 1997, astrophys. j., *475*, 373. antoshkin l. v. et al., 2000, spie, *4007*, 232. aretxaga i., d. mignant, j. melnick, r. terlevich, and b. boyle, 1998, mon. not. r. astron. soc., *298*, l13. armstrong j. t., c. a. hummel, and d. mozurkewich, 1992, proc. eso-noao conf., eds., j. m. beckers & f. merkle, eso, germany, 673. armstrong j. et al., 1998, astrophys. j., *496*, 550. arnold l., a. labeyrie, and d. mourard, 1996, adv. space res., *18*, 1148. ayers g. r., and j. c. dainty, 1988, opt. lett., *13*, 457. baba n., s. kuwamura, n. miura, and y. norimoto, 1994, astrophys. j., *431*, l111. baba n., s. kuwamura, and y.
norimoto , 1994 , appl ., * 33 * , 6662 .baba n. , h. tomita , and n. miura , 1994 , appl . opt ., * 33 * , 4428 .babcock h. w. , 1953 , pub .pac , * 65 * , 229 .babcock h. w. , 1990 , science,*249 * , 253 .bagnuolo jr .w. g. , b. d. mason , d. j. barry , w. i. hartkopf , and h. a. mcalister , 1992 , astron .j. , * 103 * , 1399 .baldwin j. et al . , 1996 ,astrophys . ,* 306 * , l13 .baldwin j. , r. boysen , c. haniff , p. lawson , c. mackay , j. rogers , d. st - jacques , p. warner , d. wilson , and j. young , 1998 , spie . , * 3350 * , 736 .baldwin j. e. , c. a. haniff , c. d. mackay , and p. j. warner , 1986 , nature , * 320 * , 595 .baldwin j. , r. tubbs , g. cox , c. mackay , r. wilson , and m. andersen , 2001 , astron .astrophys . ,* 368 * , l1 .baldwin j. e. , and p. j. warner , 1978 , mon . not .r. astron ., * 182 * , 411 .balick b. , 1987 , astron .j. , * 94 * , 671 .barlow m. j. , b. l. morgan , c. standley , and h. vine , 1986 , mon . not .r. astron .* 223 * , 151 .barnaby d. , e. spillar , j. christou , and j. drummond , 2000 , astron .j. , * 119 * , 378 .bates r. , and m. mcdonnell , 1986 , ` image restoration and reconstruction ' , oxford eng . sc . , clarendon press .beckers j. m. , 1982 , opt .* 29 * , 361 .bedding t. r. , d. minniti , f. courbin , and b. sams , 1997 , astron ., * 326 * , 936 . bedding t. , a. zijlstra , o. von der lhe , j. robertson , r. marson , j. barton , and b. carter , 1997 , mon . not .r. astron .* 286 * , 957 .beichman c. , 1998 , spie , * 3350 * , 719 .benson j. , d. mozurkewich , and s. jefferies , 1998 , spie , * 3350 * , 493 .berger j. , p. haguenauer , p. kern , k. perraut , f. malbet , i. schanen , m. severi , r. millan - gabet , and w. traub , 2001 , astron ., * 376 * , l31 .berio p. , d. mourard , d. bonneau , o. chesneau , p. stee , n. thureau , and f. vakili , 1999 , j. opt .a. , * 16 * , 872 .berio p. , p. stee , f. vakili , d. mourard , d. bonneau , o. chesneau , n. thureau , d. le mignant , and r. 
hirata , 1999 , astron ., * 345 * , 203 .bester m. , w. c. danchi , c. g. degiacomi , and c. h. townes , 1991 , astrophys .j. , * 367 * , l27 . beuzit j. et al . , 2001 , astro - ph/0106277 .blazit a. , 1986 , spie , * 702 * , 259 .blcker t. , y. balega , k. -h .hofmann , and g , weigelt , 2001 , astron .( to appear ) .blum r. , d. schaerer , a. pasquali , m. heydari - malayeri , p. conti , and w. schmutz , 2001 , astron .j. , ( to appear ) .boccaletti a. , 2001 , private communication .boccaletti a. , c. moutou , d. mouillet , a. lagrange , and j. augereau , 2001 , astron .astrophys . , * 367 * , 371 .boccaletti a. , p. riaud , c. moutou , and a. labeyrie , 2000 , icarus , * 145 * , 628 .boden a. et al . , 1999 ,j. , * 515 * , 356 .bonneau d. , and r. foy , 1980 , astron .astrophys . , * 92 * , l1 .bonneau d. , and a. labeyrie , 1973 , astrophys .j , * 181 * , l1 .born m. , and e. wolf , 1984 , principles of optics , pergamon press .bouvier j. , f. rigaut , and d. nadeau , 1997 , astron .astrophys . , * 323 * , 139 . bracewell r. n. , 1978 , nature , * 274 * , 780 . brandl b. , b. j. sams , f. bertoldi , a. eckart , r. genzel , s. drapatz , r. hofmann , m. lowe , and a. quirrenbach , 1996 , astrophys . j. , * 466 * , 254 .brandner w. , j. bouvier , e. grebel , e. tessier , d. de winter , and j. l. beuzit , 1995 , astron .astrophys . ,* 298 * , 816 .briggs d. s. , 1995 , ph .d. thesis , new mexico institute of mining and technology .bruns d. , t. barnett , and d. sandler , 1997 , spie ., * 2871 * , 890 .burge j. h. , b. cuerdem , and j. r. p. angel , 2000 , spie , * 4013 * , 640 .burns d. et al . , 1997 ,mon . not .r. astron ., * 290 * , l11 . burns d. et al . , 1998 , mon . not .r. astron .* 297 * , 467 .butler r. , and g. marcy , 1996 , astrophys .j , * 464 * , l153 .cadot o. , y. couder , a. daerr , s. douady , and a. tsinocber , 1997 , phys ., * 56 * , 427 .callados m. , and m. vzquez , 1987 , astron .astrophys . , * 180 * , 223 . carleton n. et al . 
, 1994 , spie , * 2200 * , 152. cassinelli j. , j. mathis , and b. savage , 1981 , science , * 212 * , 1497 .charbonneau d. , t. brown , r. noyes , and r. gilfiland , 2001 , astrophys .j. , ( to appear ) .ciardi d. , g. van belle , r. akeson , r. thompson , e. a. lada , and s. howell , 2001 , astrophys .j. ( to appear ) .clampin m. , j. croker , f. paresce , and m. rafal , 1988 , rev .* 59 * , 1269 .clampin m. , a. nota , d. a. golimowski , c. leitherer , and a. ferrari , 1993 , astrophys .j. , * 410 * , l35 .close l. , f. roddier , j. hora , j. graves , m. northcott , c. roddier , w. hoffman , a. doyal , g. fazio , and l. deutsch , 1997 , astrophys . j. , * 489 * , 210 .colavita m. , 1999 , pub ., * 111 * , 111 . colavita m. et al . , 1998 , spie , * 3350 * , 776 . colavita m. et al. , 1999 , astron .j. , * 117 * , 505 .conan j. -m . , l. m. mugnier , t. fusco , v. michau , and g. rousset , 1998 , appl ., * 37 * , 4614 .conan r. , a. ziad , j. borgnino , f. martin , and a. tokovinin , 2000 , spie , * 4006 * , 963 .cooper d. , d. bui , r. bailey , l. kozlowski , and k. vural , 1993 , spie , * 1946 * , 170 .cornwell t. j. , and p. n. wilkinson , 1981 , mon . not .r. astron .soc . , * 196 * , 1067 .coud du foresto v. , g. perrin , and m. boccas , 1995 , astron .astrophys . , * 293 * , 278 .coulman c. e. , 1974 , solar phys . ,* 34 * , 491 .coulman c. e. , 1985 , annu .astrophys . ,* 23 * , 19 .currie d. , k. kissel , e. shaya , p. avizonis , d. dowling , and d. bonnacini , 1996 , the messenger , no . * 86 * , 31 .danchi w. c. , m. bester , c. degiacomi , i. greenhill , and c. townes , 1994 , astron .j. , * 107 * , 1469 .danchi w. c. , p. g. tuthill , and j. d. monnier , 2001 , astrophys .j. ( to appear ) .dantowitz r. , s. teare , and m. kozubal , 2000 , astron .j. , * 119 * , 2455 .davidge t. j. , f. rigaut , r. doyon , and d. crampton , 1997 , astron .j. , * 113 * , 2094 .davidge t. j. , d. a. simons , f. rigaut , r. doyon , e. e. becklin , and d. 
crampton , 1997 , astron .j. , * 114 * , 2586 .davis j. , and w. j. tango , 1996 , pub ., * 108 * , 456 .davis j. , w. tango , a. booth , and j. obyrne , 1998 , spie , * 3350 * , 726 .davis j. , w. tango , a. booth , t. ten brummelaar , r. minard , and s. owens , 1999a , mon . not .r. astron .soc . , * 303 * , 773. davis j. , w. j. tango , a. j. booth , e. d. thorvaldson , and j. giovannis , 1999b mon . not .r. astron .soc . , * 303 * , 783 .dayton d. , s. sandven , j. gonglewski , s. rogers , s. mcdermott , and s. browne , 1998 , spie , * 3353 * , 139 .dejonghe j. , l. arnold , o. lardire , j. -p berger , c. cazal , s. dutertre , d. kohler , and d. vernet , 1998 , spie , * 3352 * , 603 .denker c. , 1998 , solar phys . , * 81 * , 108 .derie f. , m. ferrai , e. brunetto , m. duchateau , r. amestica , and p. aniol , 2000 , spie , * 4006 * , 99 .dibenedetto g. p. , and y. rabbia , 1987 , astron .astrophys . , * 188 * , 114 .diericks p. , and r. gilmozzi , 1999 , proc .` extremely large telescopes ' , eds . ,t. andersen , a. ardeberg , and r. gilmozzi , 43 .drummond j. , a. eckart , and e. hege , 1988 , icarus , * 73 * , 1 .dyck h. , j. benson , and s. ridgway , 1993 , pub .pac , * 105 * , 610 .dyck h. , g. van belle , and j. benson , 1996 , astron .j. , * 112 * , 294 . ebstein s. , n. p. carleton , and c. papaliolios , 1989 , astrophys .j , * 336 * , 103 . eke v. , 2001 , mon . not .r. astron .soc . , * 320 * , 106 .elias n. m. , 2001 , astrophys .j. , * 549 * , 647 .falcke h. , k. davidson , k. -h .hofmann , and g. weigelt , 1996 , astron .astrophys . * 306 * , l17 .faucherre m. , d. bonneau , l. koechlin , and f. vakili , 1983 , astron .* 120 * , 263 .fienup j. r. , 1978 , opt .lett . , * 3 * , 27 .fischer o. , b. stecklum , and c. leinert , 1998 , astron .astrophys . , * 334 * , 969 . fizeau h. , 1868 , c. r. acad .paris , * 66 * , 934 .fomalont e. b. , and m. c. h. wright , 1974 , in galactic and extra - galactic radio astronomy , eds ., g. l. verschuur , and k. 
i. kellerman , 256 .foy r. , d. bonneau , and a. blazit , 1985 , astron .astrophys . ,* 149 * , l13 .foy r. , and a. labeyrie , 1985 , astron .astrophys . ,* 152 * , l29 .fried d. l. , 1966 , j. opt .am . , * 56 * , 1372 .fugate r. et al . , 1994 ,a. , * 11 * , 310 .gauger a. , y. y. balega , p. irrgang , r. osterbart , and g. weigelt , 1999 , astron .astrophys . ,* 346 * , 505 .gay j. , and d. mekarnia , 1988 , proc .eso - noao conf . ed . ,f. merkle , eso , frg , 811 .gerchberg r. w. , and w. o. saxton , 1972 , optik , * 35 * , 237 .gies r. , b. mason , w. bagnuolo , m. haula , w. i. hartkopf , h. mcalister , m. thaller , w. mckibben , and l. penny , 1997 , astrophys .j. , * 475 * , l49 .glindemann a. , 1997 , pub .pac , * 109 * , 682 .glindemann a. , r. g. lane , and j. c. dainty , 1991 , proc .` digital signal processing ' , eds . , v. cappellini & a. g. constantinides , 59 .glindemann a. , and f. paresce , 2001 , http://www.eso.org/outreach .golimowski d. a. , t. nakajima , s. r. kulkarni , and b. r. oppenheimer , 1995 , astrophys .j , * 444 * , l101 .gonsalves s. a. , 1982 , opt .* 21 * , 829 .goodman j. w. , 1975 , ` laser speckle and related phenomena ' , ed . , j. c. dainty , springer - verlag , berlin , 9 .goodman j. w. , 1985 , ` statistical optics ' , wiley , n. y. gorham p. w. , 1998 , spie , * 3350 * , 116 .gorham p. , w. folkner , and g. blackwood , 1999 , asp conf ., * 194 * , eds .s. unwin , and r. stachnik , isbn : 1 - 58381 - 020-x , 359 .greenwood d. p. , 1977am . , * 67 * , 390 . grieger f. , f. fleischman , and g. weigelt , 1988 , proc. eso - noao conf .f. merkle , eso , frg , 225 .haguenauer p. , m. sevei , i. schanen - duport , k. rousselet - perraut , j. berger , y. duchne , m. lacolle , p. kern , f. melbet , and p. benech , 2000 , spie , * 4006 * , 1107 .hajian a. et al . , 1998 ,j. , * 496 * , 484 .hale d. , m. bester , w. danchi , w. fitelson , s. hoss , e. lipman , j. monnier , p. tuthill , and c. townes , 2000 , astrohys , j. 
, * 537 * , 998 .hale d. et al . , 1997 ,j. , * 490 * , 826 .hanbury brown r. , 1974 , ` the intensity interferometry , its applications to astronomy ' , taylor & francis , london .hanbury brown r. , and r. twiss , 1958 , proc .a , * 248 * , 222 .hanbury brown r. , r. c. jennison , and m. k. das gupta , 1952 , nature , * 170 * , 1061 .haniff c. , m. scholz , and p. tuthill , 1995 , mon . not .r. astron .soc . , * 276 * , 640 .et al . , 1996 , astron .astrophys . , * 312 * , 879 . hartkopf w. i. , h. a. mcalister , and b. d. mason , 1997 , chara contrib4 , ` third catalog of interferometric measurements of binary stars ' , w.i . hartley m. , b. mcinnes , and f. smith , 1981 , q. j. astr ., * 22 * , 272 .harvey j. w. , 1972 , nature , * 235 * , 90 .harvey j. w. , and j. b. breckinridge , 1973 , astrophys .j. , * 182 * , l137. hawley s. a. , and j. s. miller , 1977 , astrophys .j , * 212 * , 94 .hege e. , e. hubbard , p. strittmatter , and s. worden , 1981 , astrophys .j. , * 248 * , 1 .hestroffer d. , 1997 , astron .astrophys . ,* 327 * , 199 .heydari m. , and j. beuzit , 1994 , astron .astrophys . ,* 287 * , l17 .hickson p. , 2001, private communication .hill j. m. , 2000 , spie , * 4004 * , 36 .hinz p. , r. angel , w. hoffmann , d. mccarthy , p. mcguire , m. cheselka , j. hora , and n. woolf , 1998 , nature , * 395 * , 251 .hinz p. , w. hoffmann , and j. hora , 2001 , astrophys .( to appear ) .the hipparcos catalogue , 1997 , esa , sp-1200 .hofmann k. -h . , w. seggewiss , and g. weigelt , 1995 , astron .astrophys . , * 300 * , 403 . hgbom j. , 1974 , astron .suppl . , * 15 * , 417 .hu w. , 2001 , astro - ph/0105117 hummel c. , d. mozurkevich , j. armstrong , a. hajian , n. elias , and d. hutter , 1998 , astron . j. , * 116 * , 2536 .hutchings j. , d. crampton , s. morris , d. durand , and e. steinbring , 1999 , astron .j. , * 117 * , 1109 .hutchings j. , d. crampton , s. morris , and e. steinbring , 1998 , pub .pac , * 110 * , 374 .hutchings j. , s. 
| The present "state of the art" and the path to future progress in high spatial resolution imaging interferometry is reviewed. The review begins with a treatment of the fundamentals of stellar optical interferometry, the origin, properties, and optical effects of turbulence in the Earth's atmosphere, the passive methods that are applied on a single telescope to overcome atmospheric image degradation such as speckle interferometry, and various other techniques. These topics include differential speckle interferometry, speckle spectroscopy and polarimetry, phase diversity, wavefront shearing interferometry, phase-closure methods, dark speckle imaging, as well as the limitations imposed by the detectors on the performance of speckle imaging. A brief account is given of the technological innovation of adaptive optics (AO) to compensate such atmospheric effects on the image in real time. A major advancement involves the transition from single-aperture to dilute-aperture interferometry using multiple telescopes.
Therefore, the review deals with recent developments involving ground-based and space-based optical arrays. Emphasis is placed on the problems specific to delay lines, beam recombination, polarization, dispersion, fringe tracking, bootstrapping, coherencing and cophasing, and recovery of the visibility functions. The role of AO in enhancing visibilities is also discussed. The applications of interferometry, such as imaging, astrometry, and nulling, are described. The mathematical intricacies of the various "post-detection" image-processing techniques are examined critically. The review concludes with a discussion of the astrophysical importance and the perspectives of interferometry. |
Many simulation problems in finance and other applied fields can be written in the form $E[f(Z)]$, where $f$ is a measurable function on $\mathbb{R}^d$ and $Z$ is a standard normal vector, that is, $Z$ is jointly normal with $E[Z]=0$ and covariance matrix $I_d$. It is a trivial observation of surprisingly big consequence that $E[f(Z)]=E[f(UZ)]$ for every orthogonal transform $U$. While this reformulation does not change the simulation problem from the probabilistic point of view, it does sometimes make a big difference when quasi-Monte Carlo simulation is applied to generate the realizations of $Z$. Examples are supplied by the well-known Brownian bridge and PCA constructions of Brownian paths, which will be detailed in the following paragraphs. Assume that one wants to know $E[g(B)]$, where $B$ is a Brownian motion with index set $[0,T]$. As a concrete example we consider a digital barrier option that pays a fixed amount if the stock price stays below a barrier during $[0,T]$ and pays $0$ otherwise. We intend to price the option in a discrete Black-Scholes model, where the path of the stock is given by $S_{t_k}=S_0\exp\bigl((r-\sigma^2/2)t_k+\sigma B_{t_k}\bigr)$, with current stock price $S_0$, interest rate $r$, volatility $\sigma$, Brownian path $(B_{t_1},\dots,B_{t_d})$, and standard normal vector $Z$. Hence, the payoff function of the digital barrier option depends on the running maximum of the stock path, which leads us to an integration problem of the form $E[f(Z)]$. We can use Algorithm [alg:main] for solving this problem, and therefore we have to compute a certain conditional expectation. In the appendix we show how to calculate this expectation for a function depending on the maximum of a Brownian motion with drift. Adjusting our problem accordingly, the vector required in Algorithm [alg:main] can be approximated using the summation matrix of ([eq:summation-matrix]). The computation can be reduced to a low-dimensional integration problem using ([eq:refl3]), and formula ([eq:refl1]) simplifies; consequently, we end up with low-dimensional integrals which can be evaluated efficiently with an adaptive quadrature rule. For the numerical test we use a Sobol sequence with a random shift; the model parameters are fixed throughout.
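The appendix's closed form for the maximum of a drifted Brownian motion can be illustrated numerically. The sketch below is not the paper's algorithm; it only checks the classical identity $P(\max_{t\le T}(W_t+\mu t)\le c)=\Phi((c-\mu T)/\sqrt{T})-e^{2\mu c}\,\Phi((-c-\mu T)/\sqrt{T})$ against a crude Monte Carlo estimate. All parameter values are arbitrary illustration choices.

```python
import math
import random

def phi(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def prob_max_below(c, mu, T):
    """P(max_{0<=t<=T} (W_t + mu*t) <= c) for a barrier c > 0,
    obtained from the reflection principle plus a Girsanov change of measure."""
    sq = math.sqrt(T)
    return phi((c - mu * T) / sq) - math.exp(2.0 * mu * c) * phi((-c - mu * T) / sq)

def mc_estimate(c, mu, T, steps=400, paths=4000, seed=1):
    """Crude Monte Carlo check on a discrete time grid.  The discrete maximum
    slightly underestimates the continuous one, so this is biased a bit upward."""
    rng = random.Random(seed)
    dt = T / steps
    sdt = math.sqrt(dt)
    hits = 0
    for _ in range(paths):
        w, below = 0.0, True
        for _ in range(steps):
            w += mu * dt + sdt * rng.gauss(0.0, 1.0)
            if w > c:
                below = False
                break
        hits += below
    return hits / paths
```

For $\mu=0$ the formula reduces to $2\Phi(c/\sqrt{T})-1$, which gives a quick sanity check.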
The number of sample paths is varied over a range of values, and we compute the standard deviation of the estimates based on independent batches. Since it is not clear how to apply the LT method of Imai and Tan to barrier options, we compare the regression method with the forward method and the PCA construction only. In Figure [fig:barrier] we can observe that the difference between the forward method and the PCA is smaller than in the previous examples. Furthermore, we see that the regression method is slightly behind the PCA, but this seems to be the best we can achieve by linear approximation.

The last example we provide is an Asian (up-and-in) barrier option, by which we mean that the payoff of the option is similar to that of an Asian option as in the first numerical example, but is paid only if the underlying asset breaks through an upper barrier. The corresponding function involves the stock path of ([eq:discretestock]). Since the function has the composite form treated above, we apply Algorithm [alg:general] to the problem. The computation of the required vectors has already been discussed in the examples above: one is related to the digital barrier option and the other corresponds to the Asian option. The numerical test is based on independent batches, and we again compare the standard deviation of the forward method, the PCA construction, and the regression method for various numbers of sample paths. Moreover, we use a Sobol sequence with a random shift. Figure [fig:barrier] shows that the regression method yields slightly better results than the PCA and that the forward method is behind the other two approaches.

We now give the computations needed for examples of barrier type; that is, we want to compute $E[g(M)]$, where $g$ is some function of the maximum $M$ of a discrete Brownian path with drift, defined below.
Here $M=\max_{0\le k\le d}(W_{t_k}+\mu t_k)$ on a grid $0=t_0<t_1<\dots<t_d=T$. We make the approximation $E[g(M)]\approx E[g(\max_{0\le t\le T}X_t)]$, where $X_t=W_t+\mu t$ denotes Brownian motion with drift $\mu$. At first we compute the expectation for a given barrier level $c>0$ and measurable integrands supported below $c$; then we show how the expectation for more general $g$ can be computed using the first result. We start with a simple calculation for a Brownian motion with drift $0$. For $y\le c$ we get, using the reflection principle for Brownian motion,
$$P\Bigl(\max_{s\le t}W_s\le c,\ W_t\in dy\Bigr)=\bigl(\varphi_t(y)-\varphi_t(2c-y)\bigr)\,dy,$$
where $\varphi_t$ denotes the density of $W_t$. Next we make a Girsanov-type change of measure such that under the new measure the Brownian motion with drift becomes a standard Brownian motion; this gives
$$P\Bigl(\max_{s\le t}X_s\le c,\ X_t\in dy\Bigr)=e^{\mu y-\mu^2 t/2}\bigl(\varphi_t(y)-\varphi_t(2c-y)\bigr)\,dy.$$
The next step is to consider the restriction at two time points. Let $(\mathcal{F}_t)$ denote the standard filtration of $X$. We have already computed the first term. For the second term we note that, by the Markov property of Brownian motion, we can use our earlier result ([eq:refl1]) to obtain the conditional analogue; then, combining ([eq:refl1]) and ([eq:refl2]), we obtain the two-point formula ([eq:refl3]). Note that the expectations can be computed explicitly for suitable $g$.

| There are a number of situations where, when computing prices of financial derivatives using quasi-Monte Carlo (QMC), it turns out to be beneficial to apply an orthogonal transform to the standard normal input variables. Sometimes those transforms can be computed in time $O(d\log d)$ for problems depending on $d$ input variables. Among those are classical methods like the Brownian bridge construction and the principal component analysis (PCA) construction for Brownian paths.
Building on preliminary work by Imai & Tan as well as Wang & Sloan, where the authors try to find optimal orthogonal transforms for given problems, we present how those transforms can be approximated by others that are fast to compute. We further present a new regression-based method for finding a Householder reflection which turns out to be very efficient for a wide range of problems. We apply these methods to several very high-dimensional examples from finance. |
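The orthogonal-transform idea behind this article can be sketched in a few lines: any matrix square root $A$ of the Brownian covariance generates the discrete path as $AZ$, and the forward (Cholesky) and PCA factorizations differ exactly by an orthogonal transform $U$, which is why $E[f(Z)]=E[f(UZ)]$ leaves the price unchanged. The dimension and grid below are arbitrary illustration choices.

```python
import numpy as np

d, T = 16, 1.0
t = np.linspace(T / d, T, d)

# Covariance of (B_{t_1}, ..., B_{t_d}): Cov[i, j] = min(t_i, t_j)
cov = np.minimum.outer(t, t)

# PCA construction: B = A_pca @ Z with A_pca = E * sqrt(Lambda)
lam, E = np.linalg.eigh(cov)
A_pca = E @ np.diag(np.sqrt(np.maximum(lam, 0.0)))

# Forward (Cholesky) construction: B = A_fwd @ Z
A_fwd = np.linalg.cholesky(cov)

# Both factorizations satisfy A @ A.T = cov, hence they generate the same
# Gaussian vector; U = A_fwd^{-1} @ A_pca is the orthogonal transform
# connecting the two constructions.
U = np.linalg.solve(A_fwd, A_pca)
```

Verifying $UU^{\mathsf T}=I$ confirms that switching from the forward to the PCA construction is exactly an application of an orthogonal transform to the input variables.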
Recently, chaotic dynamical systems have become very popular in science and engineering. Besides the original definition of Li-Yorke chaos, there have been various definitions of "chaos" in the literature, and the most often used one is due to Devaney. Although there is no universal definition of chaos, its essential feature is sensitive dependence on initial conditions, so that the eventual behavior of the dynamics is unpredictable. The theory and methods of chaotic dynamical systems have been of fundamental importance not only in the mathematical sciences, but also in the physical, engineering, biological, and even economic sciences. In this paper, chaos is understood in the sense of Li and Yorke. In this short communication, we introduce and examine a family of nonlinear discrete dynamical systems that naturally arises in describing the transmission of a trait from parents to their offspring. Here we present some essential analytic and numerical results on the dynamics of such models, avoiding some technical parts of the proofs; in upcoming papers we shall investigate the asymptotic behavior of this family in depth, with detailed proofs.

As the first example, we consider Mendelian inheritance of a single gene with two alleles, written here as $A$ and $a$. Let an element $x=(x_1,x_2)$ represent a gene pool for a population, expressed as a linear combination of the alleles $A$ and $a$, where $x_1,x_2\ge 0$ and $x_1+x_2=1$; then $x_1$ and $x_2$ are the fractions of the population which carry the alleles $A$ and $a$, respectively. The rules of Mendelian inheritance determine the next generation $x'=(x_1',x_2')$ through the heredity coefficients $P_{ij,k}$, where $P_{ij,k}$ denotes the probability that a child of parents carrying alleles $i$ and $j$ receives the allele $k$, and $P_{ij,1}+P_{ij,2}=1$. Thus $x\mapsto x'$ is a nonlinear dynamical system acting on the one-dimensional simplex that describes the distribution of the next generation carrying the alleles $A$ and $a$, given the distribution of the current generation. Recall that in the Mendelian case, i.e. $P_{11,1}=P_{22,2}=1$ and $P_{12,1}=P_{12,2}=\frac12$, the dynamical system reduces to the identity map $x_1'=x_1^2+x_1x_2=x_1$, $x_2'=x_2^2+x_1x_2=x_2$.

We assume that prior to the formation of a new generation each gene has a possibility to mutate, that is, to change into a gene of the other kind. Specifically, we assume that the mutation $A\to a$ occurs with probability $\alpha$ and the mutation $a\to A$ occurs with probability $\beta$, and that the mutation occurs if and only if both parents have the same allele. Then the dynamical system takes the form
$$x_1'=(1-\alpha)x_1^2+x_1x_2+\beta x_2^2,\qquad x_2'=\alpha x_1^2+x_1x_2+(1-\beta)x_2^2.$$
An operator of this form is called a quadratic stochastic operator.

We introduce some standard terms of the theory of discrete dynamical systems. A sequence $\{V^n(x_0)\}_{n\ge0}$ is called a trajectory of $V$ starting from an initial point $x_0$, where $V^{n+1}=V\circ V^n$ for any $n$. Recall that a point $x$ is called a fixed point of $V$ if $V(x)=x$; we denote the set of all fixed points by $\mathrm{Fix}(V)$. A dynamical system is called regular if the trajectory converges for any initial point. Note that if $V$ is regular, then every limiting point is a fixed point of $V$; thus, in a regular system, the fixed points describe the long-run behavior of trajectories starting from any initial point. The biological meaning of regularity is rather clear: in the long run, the distribution of species in the next generation coincides with the distribution in the current generation, i.e. it is stable. Fixed point sets and omega limiting sets of quadratic stochastic operators (QSO) have been deeply studied, and they play an important role in many applied problems.
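The two-allele mutation dynamics above can be iterated directly. This is a minimal sketch: the coefficients follow the reconstructed operator $x_1'=(1-\alpha)x_1^2+x_1x_2+\beta x_2^2$ stated above, and the mutation rates $\alpha=0.1$, $\beta=0.2$ are arbitrary illustration values.

```python
def mutation_qso(x1, alpha=0.1, beta=0.2):
    """One generation of the two-allele model with mutation.

    Uses the reconstructed operator from the text:
        x1' = (1 - alpha) * x1^2 + x1 * x2 + beta * x2^2,
    with x2 = 1 - x1, so the state stays on the 1-simplex.
    """
    x2 = 1.0 - x1
    return (1.0 - alpha) * x1 * x1 + x1 * x2 + beta * x2 * x2

def trajectory(x1, n=300, **kw):
    """Trajectory x1, V(x1), V^2(x1), ... of length n+1."""
    out = [x1]
    for _ in range(n):
        x1 = mutation_qso(x1, **kw)
        out.append(x1)
    return out
```

For these rates the map is $f(x)=0.1x^2+0.6x+0.2$, a contraction on $[0,1]$ with unique fixed point $2-\sqrt{2}\approx0.5858$, so every trajectory converges there, illustrating the regular (ergodic) behavior discussed below.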
In the literature one finds a long self-contained exposition of recent achievements and open problems in the theory of quadratic stochastic operators. A dynamical system $V$ is said to be ergodic if the limit $\lim_{n\to\infty}\frac1n\sum_{k=0}^{n-1}V^k(x)$ exists for any initial point $x$. Based on some numerical calculations, S. Ulam conjectured that any QSO acting on the finite-dimensional simplex is ergodic. However, M. Zakharevich showed that, in general, Ulam's conjecture is false. Namely, he showed that the following QSO on the two-dimensional simplex is not ergodic:
$$V(x_1,x_2,x_3)=\bigl(x_1^2+2x_1x_2,\ x_2^2+2x_2x_3,\ x_3^2+2x_3x_1\bigr).$$
Zakharevich's result was later generalized to the class of Volterra QSO; moreover, a necessary and sufficient condition for non-ergodicity of Volterra QSO was given. We define the $k$-th order Cesaro mean recursively by $C^{(k)}_n(x)=\frac1n\sum_{i=1}^{n}C^{(k-1)}_i(x)$, where $C^{(0)}_n(x)=V^n(x)$. It is clear that the first-order Cesaro mean is nothing but the ergodic average. In this manner, Zakharevich's result says that the first-order Cesaro mean of the trajectory of the operator above diverges for any initial point except fixed points. Surprisingly, it was later shown that the Cesaro mean of any order $k\ge1$ of the trajectory of this operator diverges for any initial point except fixed points. This leads to the conclusion that the operator might have unpredictable behavior. In fact, it was proven that this operator exhibits Li-Yorke chaos. It is worth pointing out that some other properties of Volterra QSO have also been studied. Note that if a QSO is regular, then it is ergodic; however, the reverse implication is not always true. It is known that a QSO on the one-dimensional simplex is either regular or converges to a period-2 point. Therefore, on the 1D simplex, any QSO is ergodic. In other words, a mutation in a population system having a single gene with two alleles always exhibits ergodic (or almost regular) behavior. It is of independent interest to study a mutation in a population system having a single gene with three alleles.
In the next section, we consider the inheritance of a single gene with three alleles, written here as $A_1$, $A_2$, $A_3$, and show that the nonlinear dynamical system corresponding to the mutation exhibits non-ergodic behavior (or Li-Yorke chaos).

In this section, we derive a mathematical model of the inheritance of a single gene with three alleles. In this case, an element $x=(x_1,x_2,x_3)$ represents a population if its expression as a linear combination of the alleles $A_1$, $A_2$, $A_3$ satisfies $x_1,x_2,x_3\ge0$ and $x_1+x_2+x_3=1$; then $x_1$, $x_2$, $x_3$ are the fractions of the population carrying the alleles $A_1$, $A_2$, $A_3$, respectively. We assume that prior to the formation of a new generation each gene has a possibility to mutate, that is, to change into a gene of another kind, and that the mutation occurs if _both parents have the same alleles_. Specifically, we consider two types of the simplest mutations:

1. the cyclic mutations (say, $A_1\to A_2$, $A_2\to A_3$, and $A_3\to A_1$) occur with probability $\alpha$;
2. the reverse cyclic mutations (say, $A_1\to A_3$, $A_3\to A_2$, and $A_2\to A_1$) occur with probability $\alpha$.

The corresponding dynamical systems $V_\alpha$ and $W_\alpha$ are defined on the two-dimensional simplex. In both cases, if $\alpha=0$, i.e., if a mutation does not occur, then the dynamical systems coincide with Zakharevich's operator. As we already mentioned, Zakharevich's operator exhibits Li-Yorke chaos. Let $\alpha=1$. In the first case, the operator $V_1$ is a permutation of Zakharevich's operator; its omega limiting set has been studied, and by means of known results one can show that $V_1$ is non-ergodic as well as Li-Yorke chaotic. In the second case, the operator $W_1$ is a permutation of an operator studied earlier; by applying the same method, one may show that $W_1$ is regular. It is easy to check that $V_\alpha=(1-\alpha)V_0+\alpha V_1$ and $W_\alpha=(1-\alpha)W_0+\alpha W_1$.
In other words, in the first case the mutation operator is a convex combination of two Li-Yorke chaotic operators, and in the second case the mutation operator is a convex combination of a Li-Yorke chaotic operator and a regular operator. Neither $V_\alpha$ nor $W_\alpha$ with $0<\alpha<1$ has been studied in earlier papers. In what follows, we present some essential analytic and numerical results on the dynamics of $V_\alpha$ and $W_\alpha$, avoiding some technical parts of the proofs; the detailed asymptotic analysis will appear in upcoming papers.

In this section, we study the dynamics of the operators $V_\alpha$ and $W_\alpha$ and present some pictures of their attractors. We first aim at analytic results on the dynamics of $V_\alpha$, where $0<\alpha<1$. As we already mentioned, this operator can be written as the convex combination $V_\alpha=(1-\alpha)V_0+\alpha V_1$. Let $\pi$ be a cyclic permutation of the indices $\{1,2,3\}$. The proofs of the following results are straightforward.

[permutationandv] Let $V_\alpha$ be the quadratic stochastic operator above, where $0<\alpha<1$, and let $\mathrm{Fix}(V_\alpha)$ and $\omega(x_0)$ denote the set of fixed points and the omega limiting set of a trajectory, respectively. Then the operator $V_\alpha$ commutes with the permutation $\pi$; consequently, the fixed point set is invariant under $\pi$, a finite omega limiting set consists of fixed points, and $\omega(\pi(x_0))=\pi(\omega(x_0))$ for any $x_0$.
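The commutation statement of Proposition [permutationandv] can be checked numerically. The sketch assumes the convex-combination form $V_\alpha=(1-\alpha)V_0+\alpha\,(\pi\circ V_0)$ with the cyclic permutation $\pi(x_1,x_2,x_3)=(x_2,x_3,x_1)$; this particular choice of $V_1$ is a hypothetical reconstruction, not taken verbatim from the paper.

```python
def V0(x):
    # Zakharevich's operator (the alpha = 0 case)
    x1, x2, x3 = x
    return (x1 * x1 + 2 * x1 * x2,
            x2 * x2 + 2 * x2 * x3,
            x3 * x3 + 2 * x3 * x1)

def pi(x):
    # cyclic permutation of coordinates (an assumed choice)
    return (x[1], x[2], x[0])

def V_alpha(x, alpha):
    # assumed convex-combination form of the mutation operator
    a = V0(x)
    b = pi(V0(x))
    return tuple((1 - alpha) * u + alpha * v for u, v in zip(a, b))

def check_commutes(x, alpha):
    """Max coordinate-wise difference between V_alpha(pi(x)) and pi(V_alpha(x))."""
    lhs = V_alpha(pi(x), alpha)
    rhs = pi(V_alpha(x, alpha))
    return max(abs(u - v) for u, v in zip(lhs, rhs))
```

Since $V_0$ itself is equivariant under the cyclic shift ($V_0\circ\pi=\pi\circ V_0$), any convex combination of $V_0$ and $\pi\circ V_0$ inherits the commutation, which is what the check confirms pointwise.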
We aim to study the fixed point set $\mathrm{Fix}(V_\alpha)$, where $0<\alpha<1$. It is worth mentioning that the vertices $e_1,e_2,e_3$ of the simplex and its center $C=(\frac13,\frac13,\frac13)$ play a special role. Recall that a fixed point is non-degenerate if and only if the Jacobian determinant $\det(I-DV_\alpha)$ is nonzero at the fixed point.

[uniquefixedpoint] Let $V_\alpha$ be the quadratic stochastic operator above, where $0<\alpha<1$, and let $C$ be the center of the simplex. Then the following statements hold true:
* all fixed points are non-degenerate;
* $\mathrm{Fix}(V_\alpha)=\{C\}$.

Proof. Let $x=(x_1,x_2,x_3)$ be a fixed point. One can easily check that outside one exceptional parameter range the determinant expression is positive, so all fixed points are non-degenerate there. In the remaining case, the expression vanishes only under an additional norm condition on $x$; without loss of generality we may fix one ordering of the coordinates (see Proposition [permutationandv]). Under that condition we obtain, writing $\tilde{x}=V_\alpha(x)$,
$$\tilde{x}_1^{\,2}+\tilde{x}_2^{\,2}+\tilde{x}_3^{\,2}\ \le\ x_1^2+\bigl[(1-\alpha)x_2+\alpha x_3\bigr]^2+x_3^2\ <\ x_1^2+x_2^2+x_3^2=\frac{1+2\alpha^2}{2(1-\alpha+\alpha^2)},$$
which contradicts $x$ being a fixed point with the stated norm. In a similar manner one obtains a contradiction in the symmetric case. This shows that all fixed points are non-degenerate. Next we show that $\mathrm{Fix}(V_\alpha)=\{C\}$. It is clear that the operator has no fixed point on the boundary of the simplex. Moreover, all fixed points are non-degenerate, so by Theorem 8.1.4 of the cited monograph the number of fixed points must be odd; Corollary 8.1.7 together with Proposition [permutationandv] then forces this number to be one. Simple calculations show that $V_\alpha(C)=C$, so $\mathrm{Fix}(V_\alpha)=\{C\}$.

We can easily check the local behavior of the fixed point.

[localbehavioroffixedpoint] Let $V_\alpha$ be the quadratic stochastic operator above, where $0<\alpha<1$. Then the following statements hold true:
* if $\alpha\neq\frac12$ then the fixed point $C$ is repelling;
* if $\alpha=\frac12$ then the fixed point $C$ is non-hyperbolic.
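For the unperturbed operator $V_0$ (Zakharevich's operator) the repelling nature of the center can be verified by computing the Jacobian spectrum at $C$; the two eigenvalues along the simplex are $1\pm i/\sqrt{3}$, of modulus $\sqrt{4/3}>1$. The sketch below writes out this Jacobian explicitly; the perturbed case $V_\alpha$ is analogous once its exact coefficients are fixed.

```python
import numpy as np

# Jacobian of Zakharevich's operator
#   V(x) = (x1^2 + 2*x1*x2, x2^2 + 2*x2*x3, x3^2 + 2*x3*x1)
# evaluated at the center C = (1/3, 1/3, 1/3).
x1 = x2 = x3 = 1.0 / 3.0
J = np.array([
    [2 * x1 + 2 * x2, 2 * x1,          0.0            ],
    [0.0,             2 * x2 + 2 * x3, 2 * x2         ],
    [2 * x3,          0.0,             2 * x3 + 2 * x1],
])

moduli = np.sort(np.abs(np.linalg.eigvals(J)))
# moduli[2] == 2 corresponds to the direction transversal to the simplex;
# the remaining pair 1 +/- i/sqrt(3) has modulus sqrt(4/3) > 1, so C is
# repelling within the simplex.
```

The matrix is circulant, $J=\frac43 I+\frac23 P$ for the cyclic shift $P$, so its eigenvalues are $\frac43+\frac23\omega^k$ with $\omega=e^{2\pi i/3}$, confirming the values above.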
We shall study the two cases and separately. Let be a quadratic stochastic operator given by , where . Then is an infinite compact subset of for any . Let . Since is continuous and , the omega-limit set is a nonempty compact set and , for any . We want to show that is infinite for any . Since is repelling, we have that . Let us pick any point from the set . Since the operator does not have any periodic point, the trajectory of the point is infinite. Since is continuous, we have that . This shows that is infinite for any . It is worth mentioning that the sets of omega-limit points and of both operators and are infinite. However, unlike the operator , we have the inclusions and . Moreover, both operators and are non-ergodic. Let be a quadratic stochastic operator given by , where . Then the following statements hold true: * the operator is non-ergodic; * the operator exhibits Li-Yorke chaos. Now we shall study the case . The operator takes the following form in this case . The fixed point is non-hyperbolic and the spectrum of the Jacobian of the operator at the fixed point is . Let us define the following sets . We have the following cycles: * ; * . Let be an operator given by . One can easily check that . The proof of the proposition follows from the above equality. Let be a quadratic stochastic operator given by . The following statements hold true: * is a Lyapunov function; * the trajectory always converges to the fixed point . Let be an operator given by . It follows from that . On the other hand, we have that . Therefore, one has that for any . This means that is decreasing along the trajectory of . Consequently, is a Lyapunov function. We know that is a decreasing bounded sequence; therefore, the limit exists. We want to show that . Suppose that . It means that . Since , we get that . On the other hand, since , there exists such that for any one has that . This is a contradiction. It shows that . Therefore, we want to show that .
We know that . It follows from that . This means that . This completes the proof. We are going to present some pictures of attractors (omega-limit sets) of the operator given by . In the cases and , the operators and have similar behaviors. The trajectories of both operators and look like spirals, but one of them moves clockwise and the other anticlockwise. In these cases, we have that and . We are interested in the dynamics of the mutation operator as approaches from both the left and the right. In order to see some symmetry, we shall provide the attractors of and at the same time. If we change slightly away from and , we can see that the omega-limit set splits off from the boundary. Moreover, in the picture we can see, roughly speaking, *"three vertexes"* around which the trajectory spends almost all of its time. Therefore, we expect non-ergodicity of the operator if is near or . If becomes close to , then we can see some chaotic pictures. We observe from the pictures that, in the cases and , the attractors are the same but with opposite orientations: one trajectory moves clockwise and the other anticlockwise. If is close enough to , then we detect completely different pictures. Here are some pictures for the values of .
In these cases, the attractors are disconnected and consist of 6 connected components. It is quite surprising that the number of connected components is 6 and not, say, 5 or 7. [figalpha0109] [figalpha04970503] [figalpha04990501] [figalpha0499505005] [figalpha0499905001] (Pairs of attractor plots for values of approaching from both sides.) For the operator , the fluctuation point is . In this case the influence of the chaotic operators and is the same; therefore the operator becomes regular and ergodic. This completes the numerical study of the operator . We now aim to present some analytic results on the dynamics of : where and . As already mentioned, this operator can be written in the following form: for any , where . As we already stated, the operator is Zakharevich's operator and the operator is a permutation of the operator which was studied in . By means of the methods used in , we can easily prove the following result. Let be a quadratic stochastic operator given by with . Then the following statements hold true: * the operator has a unique fixed point, which is attracting; * the vertexes of the simplex are 3-periodic points; * is a Lyapunov function; * the operator is regular in . By means of the same methods and techniques used for the operator , we can prove the following results. Let be a quadratic stochastic operator given by . Then it has a unique fixed point, i.e., . Moreover, one has that: * if then the fixed point is repelling; * if then the fixed point is attracting; * if then the fixed point is non-hyperbolic.
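The attractor computations behind the pictures in this section can be reproduced with a short orbit-tail sketch: iterate the mutation operator, discard a transient, and keep the tail of the orbit as an approximation of the omega-limit set. Since the coefficients of the operators studied here are given by formulas not reproduced in this text, the sketch below uses, as a hedged stand-in, a convex combination of Zakharevich's operator (mentioned above) and a coordinate-permuted copy of it:

```python
import numpy as np

def zakharevich(x):
    x1, x2, x3 = x
    return np.array([x1*x1 + 2*x1*x2, x2*x2 + 2*x2*x3, x3*x3 + 2*x3*x1])

def permuted(x):
    # the same operator composed with a cyclic permutation of coordinates
    x1, x2, x3 = x
    return np.array([x1*x1 + 2*x1*x3, x2*x2 + 2*x2*x1, x3*x3 + 2*x3*x2])

def mutation(x, alpha):
    # convex combination of the two extreme (non-mixing) operators
    y = (1 - alpha)*zakharevich(x) + alpha*permuted(x)
    return y / y.sum()   # renormalize to guard against floating-point drift

def orbit_tail(x0, alpha, burn=2000, keep=500):
    x = np.array(x0, float)
    for _ in range(burn):           # discard the transient
        x = mutation(x, alpha)
    tail = []
    for _ in range(keep):           # approximate the omega-limit set
        x = mutation(x, alpha)
        tail.append(x.copy())
    return np.array(tail)

tail = orbit_tail([0.2, 0.3, 0.5], alpha=0.1)
assert np.allclose(tail.sum(axis=1), 1) and (tail >= -1e-12).all()
```

Plotting `tail` as points in the simplex (e.g., in barycentric coordinates) reproduces attractor pictures of this kind; for this combination the fixed point is repelling, so the tail stays away from the center.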
Let be a quadratic stochastic operator given by . Then the following statements hold true: * if then is an infinite compact set for any ; * if then for any . Let be a quadratic stochastic operator given by , where . Then the following statements hold true: * the operator is non-ergodic; * the operator exhibits Li-Yorke chaos. We are going to present some pictures of attractors (omega-limit sets) of the operator given by . In the cases and , the operator is chaotic and the operator is regular. Since , the mutation operator gives a transition from chaotic behavior to regular behavior. Consequently, we aim to find *fluctuation points* at which _we can see the transition from chaotic behavior to regular behavior._ [wfigalpha00101] (Attractor plots for values of near .) If is very close to , then the attractors of the operator split off from the boundary of the simplex. However, the influence of the operator is still very strong; therefore we expect the operator to be non-ergodic and chaotic whenever is very close to . If becomes close to , then we can see a different picture. In the attractors, we are able to detect some *"growing hairs"* (just a terminology). Moreover, if we continue to increase , these "hairs" start to rise and eventually become straight. Therefore, *is the first fluctuation point.* It is very interesting that the number of "hairs" equals 12. Again, one may ask why there are 12 "hairs" and not 11 or 13. [wfigalpha01333] [wfigalpha01390144015] (Attractor plots for several values of .) From these pictures, we can find *another fluctuation point.
* Therefore, in order to have a transition from chaotic to regular behavior, we should cross *two fluctuation points, and .* In this paper, we present a mathematical model of mutation in a biological environment having 3 alleles. We have presented two types of mutations and shown that a mutation (a mixing) in the system can be considered as a convex combination of Mendelian inheritances (extreme, or non-mixing, systems). The first mutation is a convex combination of two Li-Yorke chaotic systems, and the second mutation is a convex combination of a Li-Yorke chaotic system and a regular one. In the first case, the mutation can be considered as an evolution process between two different chaotic biological systems; we have shown that there is one fluctuation point (transition point) at which one chaotic system changes into the other. In the second case, the mutation can be considered as an evolution process between chaotic and regular biological systems; here we need two fluctuation points (transition points) in order to change the chaotic system into the regular one. We hope that a similar phenomenon can be found in nature. Ganikhodzhaev, R.N., Mukhamedov, F.M., Rozikov, U.A. Quadratic stochastic operators: results and open problems. _Infinite Dimensional Analysis, Quantum Probability and Related Topics_, Vol. 14, No. 2 (2011), 279-335. |
A numerical simulation helps us obtain a clear picture of the chaotic behavior of such models. _Mathematics Subject Classification 2010_: 92Bxx, 37D45, 39A33. _Key words_: Li-Yorke chaos; mutation; regular transformation; non-ergodic transformation; quadratic stochastic operator. |
Simulation of organic semiconductor devices, e.g. organic light emitting diodes (OLED), organic solar cells, etc., has gained great interest in the past decade. Accurate models often lack precise parameters, and identifying them is time-consuming or expensive. One major problem is the uncertainty in the measurement data, which leads to uncertain parameters. Carrying out more experiments would reduce this uncertainty, but at great expense. An alternative is to use OED in order to minimize the parameter uncertainty by planning new (optimal) experiments. New measurement data are obtained for which the parameter estimation yields parameters with minimal confidence intervals. We apply the concept of OED to the EGDM, a special model for the mobility of electron transport in organic polymeric material. The chapters are arranged as follows: we give a brief overview of the model equations in Sec. [sec:2] and point out what the relevant quantities are. In Sec. [sec:3], we explain the methodology of the optimum experimental design problem in more detail. After describing the equation solver methods we used, Sec. [sec:4], numerical results of the optimization are presented in Sec. [sec:5]. A basic description of charge transport in semiconducting materials in the steady-state case is given by the coupled van Roosbroeck system, consisting of the continuity equation, also called the drift-diffusion equation, and the Poisson equation. Given a domain , the state variables, i.e. the space-dependent functions, are the electric charge density in ] which are scalar real-valued functions defined on . We assume that and are twice differentiable on . Pasveer et al.
proposed the EGDM for conjugated semiconducting polymers, where the diffusion and the mobility depend on the state variables and hence on the space variable. Furthermore, they introduced another state, called the quasi-Fermi energy, and a corresponding equation which couples and at every space point. We omit the -dependence in the following equations; only , and are space dependent. The model equations are: where , \\ g_2(\phi) &= \exp\left\{0.44(\hat{\sigma}^{\frac32}-2.2)\right\} \\ &\qquad \left[\sqrt{1 + 0.8\left(\min\left\{\frac{\partial_x \phi}{{n_t}^{\frac13}\sigma},2\right\}\right)^2} - 1\right], \\ g_3(n, E_F) &= \frac{n}{k_B T \frac{dn}{dE_F}}. \label{eq:g3} \end{aligned}\]] In these equations is the electric current density in ], the permittivity in ], and . In the inorganic case, the -factors , , would be constant. Their organic model is obtained by comparison with the solution of the master equation. On the boundary the following conditions are imposed: ^{-1}\,dE, \\ n(L) &= \frac{n_t}{\sqrt{2\pi\sigma^2}} \int\limits_{-\infty}^\infty e^{-\frac{E^2}{2\sigma^2}} \frac{1}{1+\exp\left(\frac{E+\varphi_2}{k_B T}\right)}\,dE, \\ \phi(0) &= 0, \\ \phi(L) &= eV - \left(\varphi_2 - \varphi_1\right), \end{aligned}\]] where is the voltage in ]. At the entrance, , the energy barrier is lowered according to the theory of Emtage and O'Dwyer and of Scott and Malliaras. For our computations we take the dimensionless form of the equations proposed by Bonham and Jarvis. For later use we define where are parameters of unknown numerical value given by nature. They have to be identified by comparing a model response to experimental data. is the zero-temperature mobility in ] and is the site density in ] and the temperature in ] with a constant mesh size. Finite differences are applied to the spatial derivatives, i.e.
with respect to . The so-called Scharfetter-Gummel scheme forces the function , defined in Eq. , to be constant on each interval, denoted by , and provides an upwind stabilization, so that computation on coarse meshes is possible. On the interval , the scheme reads . The terms , stand for average values of the non-constant functions . It is important that the averages are taken in an upwind-conforming way to prevent numerical oscillations. In this part, we follow the approaches of Lohmann and of Körkel et al. With different choices of the controls and the voltage , defined in Sec. [sec:2], we set up multiple experiments in which the current density is measured. Let be the number of measurements we obtain. In a parameter estimation, the parameters are identified by fitting a model response, here , to experimental data, i.e. measurements. We assume the measurement error to be normally distributed with mean zero and covariance matrix . With the same experimental settings, i.e. equal controls, a fit from a different realization of the measurement errors may result in very different parameter values. The covariance matrix of the parameters allows us to analyze the quality of a parameter estimation. The assumed model for the standard deviations of the measurement errors is: , where is the function value of corresponding to the -th measurement. For further notation, we assemble the values in the vector . If the confidence region of the parameters is approximated by assuming linear propagation of the measurement errors, it can be parameterized by the covariance matrix defined by \[\mathbb{E}[(p-\mathbb{E}[p])(p-\mathbb{E}[p])^T] \in \mathbb{R}^{n_p\times n_p},\] where is the number of parameters.
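The linearized covariance matrix above is the quantity that OED tries to shrink. A minimal numpy sketch, with a hypothetical toy model standing in for the EGDM solve (the exponential model, the measurement times, and all function names here are our assumptions, not from the text), shows how the confidence region contracts when an informative measurement is added:

```python
import numpy as np

def model(p, t):
    # hypothetical toy response; stands in for the full device simulation
    return p[0] * np.exp(-p[1] * t)

def jacobian(p, t, h=1e-6):
    # central finite differences of the model response w.r.t. the parameters
    J = np.zeros((len(t), len(p)))
    for j in range(len(p)):
        dp = np.zeros(len(p))
        dp[j] = h
        J[:, j] = (model(p + dp, t) - model(p - dp, t)) / (2*h)
    return J

def covariance(p, t, sigma):
    # linearized covariance C = (J^T Sigma^{-1} J)^{-1}
    J = jacobian(p, np.asarray(t, float))
    W = np.diag(1.0 / np.asarray(sigma, float)**2)
    return np.linalg.inv(J.T @ W @ J)

p = np.array([2.0, 0.5])
t1 = [0.5, 1.0]                      # initial experiment
C1 = covariance(p, t1, [0.1, 0.1])
t2 = t1 + [4.0]                      # one extra, informative measurement
C2 = covariance(p, t2, [0.1, 0.1, 0.1])
assert np.trace(C2) < np.trace(C1)   # extra data shrinks the confidence region
```

An OED method would choose the new measurement settings (here, the time point) to minimize a scalar functional of this covariance, e.g. its trace.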
From now on, we denote by , and the discrete counterparts of the state variables, which are -dimensional vectors without boundary values. They assemble the function values at the mesh points , , cf. Sec. [sec:3]. We abbreviate the discrete solution of the system, dependent on parameters and controls, by , cf. Table [tab:gummel]. We use the notation for the overall state dimension. In the following we denote the derivative of a function w.r.t. in the direction by , and write accordingly the second derivatives of w.r.t. and in the directions and as . We combine several directions into a so-called seed matrix and define by , and accordingly by , with and . We define the matrix , \end{aligned}\]] with the derivative of w.r.t. the parameters with the -dimensional identity matrix as seed matrix. Computing is much less expensive than computing the matrix product with the identity matrix. The covariance matrix in the unconstrained case can be computed by for given probability ]
Regression models are one of the main tools of statistical modeling and supervised machine learning. In regression models, _responses_ are related to _predictors_ by a probabilistic model that is a function of model parameters that we want to estimate from the data. The most common regression model, linear regression, assumes the response follows a Gaussian distribution that can take values anywhere in the real numbers. However, different applications have responses of a different nature. For example, the response might take values only in the positive real numbers (e.g., the time between the arrivals of two buses), or in the non-negative integers (e.g., the number of thunderstorms in a day, or the number and identity of the items chosen by someone from a catalog of items). In such cases, assuming that the response is Gaussian might produce an inferior model to one obtained assuming a different distribution that better matches the nature of the response data. When the response takes on two values, it may be modeled as a Bernoulli random variable. When it is a non-negative real number, it may be modeled with an exponential, Erlang, or gamma distribution. When the response is a non-negative integer, it may be modeled as a Poisson, negative binomial, or binomial random variable. When the response is a category, it may be modeled by a multinomial or categorical distribution. All these distributions, and more, are part of the so-called exponential family ( , , ). The exponential family also includes distributions for random vectors with vector entries that are correlated, e.g.
, as in the multivariate Gaussian distribution, as well as with independent entries of different distribution types. The generalized linear model (GLM) introduced in provides the theory to build regression models with static parameters where the response follows a distribution in the exponential family. The GLM can thus be seen as a generalization of linear regression. Many applications of regression models, including those that rely on the GLM, assume the parameters are static. This assumption is too restrictive when modeling time series, which often exhibit trends, seasonal components, and local correlations. Similarly, in many applications the model parameters represent variables that describe a portion of the state of the underlying system that cannot be, or are not, directly measured, but whose dynamics follow well-known relationships. The response can then be thought of as a noisy function of the system state and the predictors, and the goal of the regression model is to estimate the system state over time based on the observation time series. Situations where an underlying dynamical system is only observed through noisy measurements are often encountered in engineering applications, and the celebrated and widely used Kalman filter, introduced in , solves the corresponding regression model when the parameter dynamics (linear with additive Gaussian noise) and the observation distribution (Gaussian with a mean that is a linear function of the state) are simple enough. The Kalman filter can be seen as an algorithm that efficiently estimates the mean and covariance matrix of the parameters in a linear regression model, where the parameters evolve in time through linear dynamics, based on the time series of responses and predictors. Furthermore, the Kalman filter is an online algorithm that updates the parameter estimates in a relatively simple fashion when new observations arrive, without having to re-analyze all previous
data. In addition, it is a second-order algorithm, in contrast to stochastic gradient descent (e.g., ). So while the Kalman filter takes more computation per observation than stochastic gradient descent, it can converge to an accurate estimate of the parameters with fewer observations, while allowing for much more flexibility in modeling the underlying parameter dynamics. Lastly, and importantly, because the Kalman filter provides an estimate of the mean and covariance matrix, unlike other approaches that focus only on the mean, its results can be used to construct confidence intervals or samples of the parameters or of the model predictions; these statistics are often necessary in applications such as contextual multi-armed bandits (e.g., ). Many generalizations of the Kalman filter have been developed, e.g., to allow for non-linear state dynamics, or for observations that are noisy, non-linear and/or non-Gaussian functions of the state, as in the extended Kalman filter; see for a good overview. More recently, there has been interest in merging ideas from Kalman filtering, namely modeling dynamics in the parameters of a regression model, with the flexibility to model observations from the wide class of probability distributions that the GLM allows through its treatment of the exponential family ( , , , , , ). It turns out that approximate algorithms very similar to the extended Kalman filter can be derived for a wide range of choices of observation distribution. Introducing and describing some of these algorithms for a fairly general class of models, including the multivariate GLM, is the main focus of this paper. The derivation we follow is novel and simpler (e.g., it does not invoke any Kalman filter theory nor use conjugacy).
And the form of the exponential family we use is slightly more general than that used in other references on these methods, because the nuisance parameter matrix included in our model is absent in other references that deal with multivariate responses. The second focus of this paper is to propose the application of these methods to the contextual multi-armed bandit problem ( , ), and to show the resulting performance through simulations. This application is novel to the best of our knowledge. It broadens the class of models that can be used to describe the rewards, separates the concepts of a response and a reward, and allows for explicit modeling of dynamics in the parameters that map the context to the rewards. Section [sec:setup] introduces the class of models we study and reviews the exponential family and the generalized linear model. Section [sec:algo] describes the general online algorithm to estimate the mean and variance of the parameters, and specializes it to the multivariate exponential family, the univariate exponential family, and several examples of commonly used distributions. We sketch the derivation of the algorithm in Section [sec:derivation]. We apply the methods developed to the contextual multi-armed bandit problem in Section [contextualbandits], and conclude in Section [conclusion]. We assume all vector and matrix entries are real numbers. We denote vectors using boldface small caps and assume them to be column vectors. We use small caps for scalars and boldface large caps for matrices. If is a matrix, we denote its inverse by and its transpose by . We assume we have received pairs of observations , for , where is the -th _response_ that we want to explain based on the -th _predictor_ , for some positive integers and . We denote by the information or history after observations, i.e., is the set that includes the first responses and predictors.
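As a toy, hedged illustration of such an observation history (the logistic response, the drift model, and all names here are our own assumptions, anticipating the parameter dynamics introduced later), one might generate predictor-response pairs as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(T=200, d=3, drift_sd=0.02):
    # theta_t = theta_{t-1} + w_t : unbiased random-walk parameter drift
    theta = np.zeros(d)
    ys, thetas = [], []
    for _ in range(T):
        theta = theta + rng.normal(0.0, drift_sd, d)
        x = rng.normal(size=d)                # predictor (context features)
        p = 1.0 / (1.0 + np.exp(-x @ theta))  # Bernoulli mean via a logistic map
        ys.append(rng.random() < p)           # binary response
        thetas.append(theta.copy())
    return np.array(thetas), np.array(ys)

thetas, ys = simulate()
assert thetas.shape == (200, 3)
```

A history of this kind is exactly the input the online estimation algorithm below consumes one pair at a time.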
We postulate a model that relates the response to the predictor via a probability distribution , where are the model parameters at the -th observation, with one parameter for each row in the predictor matrix . is a -by- matrix and nuisance parameter that is assumed known and that we will often omit. As we will see later, the nuisance parameter plays the role of the covariance of the observations in linear regression, and is the identity matrix in many other cases of interest. We consider regression models where the probability of the response depends on the predictors and the parameters only through the -by- _signal_ , namely, models where . Rather than working with , we will typically work with its logarithm, denoted by , which we assume to be a well-behaved function; in particular, the first and second derivatives with respect to exist and are finite. Many of the commonly used regression models fit the model above, including univariate and multivariate linear regression with response , logistic or binomial, categorical or multinomial, exponential, Poisson, negative binomial, gamma, etc. Our model also includes cases where the different entries in the response vector are conditionally independent given the signal, and follow any distribution that is a function of the signal or of a subset of the signal entries. For example, we can have as many predictor vectors as entries in , i.e., , and have the -th response entry depend only on the -th signal entry, so that , with a different function for different response entries. Because all the entries still depend on the parameters , this setup allows for combining the information from different types of measurements that depend on the parameters to obtain more accurate parameter estimates. Also, the predictor matrix may have a lot of structure, e.g.
, to allow for some parameters to be shared across entries in the response vector, and others that are specific to subsets of the response. Here we drop the time subscript of our vectors and matrices to avoid notational clutter, so that, e.g., becomes simply . All of the probability distributions of interest for modeling the response mentioned above, and more, can be re-arranged so that their log-likelihood has the following so-called _natural exponential form_ . Here, is a -by-1 vector referred to as the natural parameter, with in its -th entry. It is a function of the signal in our models; this will be made specific in the next section. Crucially, the function is independent of , and the function is independent of . We assume that is twice differentiable with respect to its argument. The -by- nuisance parameter matrix is assumed symmetric and known. When is unknown, it can be estimated through several methods that are not online; e.g., see Chapter 13 in . It can be shown, e.g., see Appendix [sec:meanvarnef], that the mean and covariance matrix of are given by \[\begin{aligned} \mathbf{\mu(\eta)} = E[\mathbf{y}] &= \mathbf{\Phi}\frac{\partial b}{\partial \mathbf{\eta}} \label{eq:nefmean} \\ \mathbf{\Sigma_y(\eta)} = E[(\mathbf{y}-\mathbf{\mu(\eta)})(\mathbf{y}-\mathbf{\mu(\eta)})'] &= \mathbf{\Phi}\frac{\partial^2 b}{\partial \mathbf{\eta}^2}\mathbf{\Phi}, \label{eq:nefvar}\end{aligned}\] where is a column vector with in its -th entry, and is the -by- matrix with in its -th row and -th column. Most of the frequently encountered univariate and multivariate distributions have the exponential form above. In addition, a union of independent random vectors that are in the natural exponential family is also in the natural exponential family. E.g.
, a random vector with entries distributed according to different members of the exponential family is still in the natural exponential family, with a natural parameter given by the union of the natural parameters of its entries. This will allow us to estimate shared parameters in a regression model from multiple time series of a potentially different nature. We consider models where the response is distributed according to Equation [eq:nef], which is a function of . We use the GLM to relate to the signal . In the GLM, introduced in , we assume that the signal is a function of the mean of the observation, in addition to assuming Equation [eq:nef] for the observation. The GLM then assumes that there is a known one-to-one mapping between the natural parameter in Equation [eq:nef] and the signal , so we can view the likelihood or any statistic of either as a function of or of . Specifically, we have for known functions and , where the so-called _link function_ maps the mean of to the signal. The mean and covariance in Equations [eq:nefmean] and [eq:nefvar] can then be seen to be either a function of the natural parameter or of the signal , e.g., here the function is the inverse of . We refer to as the _response function_; it maps the signal to the mean of the response and plays a prominent role in our algorithms. Any invertible function can be used as the link function in a GLM, but one that often makes sense, and for which the mathematics to learn the model simplifies, is the _canonical link_ that results in the signal being equal to the natural parameter, i.e., . Previous treatments of the GLM in the context of dynamic parameters consider either a univariate response with the univariate case of Equation [eq:nef] (e.g., see ), or a multivariate response with Equation [eq:nef] but with the nuisance parameter matrix equal to the identity (e.g., see Chapter 2 in ).
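For the canonical links of a few common members of the family, the response function and the variance implied by the second derivative of the log-partition function can be written down directly. A minimal sketch (the function names are ours):

```python
import numpy as np

# canonical-link response functions h: signal -> E[y | signal],
# each returned together with the variance the family implies
def bernoulli(eta):
    mu = 1.0 / (1.0 + np.exp(-eta))    # logistic response (logit link)
    return mu, mu * (1.0 - mu)

def poisson(eta):
    mu = np.exp(eta)                   # exponential response (log link)
    return mu, mu                      # mean equals variance

def gaussian(eta):
    return eta, 1.0                    # identity link, unit nuisance

mu, var = bernoulli(0.0)
assert mu == 0.5 and var == 0.25
mu, var = poisson(np.log(3.0))
assert abs(mu - 3.0) < 1e-12 and abs(var - 3.0) < 1e-12
```

With the canonical link, the signal equals the natural parameter, which is why the mean and variance above are just the first and second derivatives of the corresponding log-partition functions.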
In this sense our treatment is a slight generalization. We assume that the parameters evolve according to , where is a zero-mean random vector with known covariance matrix . We also assume that the _noise_ is uncorrelated with itself over time, and uncorrelated with the observation parameters. The known vector drives the parameters through the appropriately sized and known matrix , and is a known -by- square matrix. We also assume that at time zero the mean and variance of are known, i.e., that . The general setup above includes several special cases of interest, described next. When (the identity matrix), , and , we end up with the simple parameter dynamics , which is the standard regression problem with static parameters. This becomes the GLM when we also assume that the response is in the exponential family, with a natural parameter that is a function of the signal. In this sense, our model is a generalization of the GLM where the parameters are allowed to vary over time. When and , we end up with the simple parameter dynamics . This allows the parameters to drift or diffuse over time in an unbiased (zero-mean) way, according to the noise . This model is appealing for a range of applications; e.g., it could model the conversion rate of visitors to the Netflix signup page as a function of their browser, country, day of week, time of day, etc. Equation [eq:pardynamics] is central to the study of linear dynamical systems, control theory, and other related areas.
Specifically, it is core to the Kalman filter, which also assumes a dynamical system that evolves according to Equation [eq:pardynamics], but with a response that is a linear function of the parameters (the state in Kalman filter parlance) plus additive Gaussian noise. The Kalman filter is an online algorithm that estimates the mean and variance of the system state from the noisy observations, so our setup is closely related. The main difference is that we do not restrict our response to be Gaussian, but rather allow it to be a non-linear function of , itself a linear function of the state. The non-linearity and the noise characteristics of the observations follow from the choice of regression model made, e.g., from the specific mapping between the signal and the response: choosing a Gaussian distribution for the response with a mean equal to the signal yields the standard Kalman filter. In this sense, our model is a generalization of the standard Kalman filter. A variant of the Kalman filter known as the extended Kalman filter deals with general non-linear observations, and has been applied to responses modeled through the exponential family ( , ), yielding an algorithm that can be shown to be equivalent to ours. Based on the assumptions above, we obtain the following factorization of the joint probability function of predictors, responses, and parameters: . We seek an algorithm to compute the mean and covariance matrix of the model parameters using all the observations we have at any given time. We want this algorithm to be online, i.e.
, to perform a relatively simple update to the previous mean and covariance estimates when a new observation arrives , without having to re - analyze previous observations .let and denote the mean and covariance matrix of .first , we initialize the mean and covariance of to the known values and .we proceed by induction : we assume we know that has parameter mean and covariance matrix and , and use them and the new observation to compute and through a two - step process suggested by the following simple relation : equation [ eq : bayes ] relates the posterior to its prior , which predicts based on all previous information up to but ignoring the observation at time , and to the log - likelihood of the latest observation .we compute the mean and covariance of the prior via this equation is exact and does not require assuming any functional form for .it follows fairly directly from equation [ eq : pardynamics ] , e.g. , see appendix [ sec : meanvarpredictionapp ] for a derivation of these and other equations in this section . when the parameter dynamics are non - linear , the mean and covariance of can be approximated through expressions identical to equations [ eq : meanpred ] and [ eq : varpred ] by suitably re - defining the matrices that appear in them , e.g. , by linearizing the parameter dynamics around or via numerical simulation as in the so - called unscented kalman filter ( , and ) . note that when is the identity matrix , equation [ eq : varpred ] shows that the variance of the parameter estimates always increases in the prediction step , unless the dynamics noise vanishes . in addition , because the system input is deterministic , it does not contribute to the covariance matrix .
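the prediction step can be sketched in a few lines of code . this is a sketch under assumed names : we write `G` , `B` and `W` for the dynamics matrix , the input matrix and the noise covariance of equation [ eq : pardynamics ] , since the original symbols are elided here .

```python
import numpy as np

def predict(m_prev, C_prev, G, B, u, W):
    """Prediction step: propagate the parameter mean and covariance one
    step forward through the linear dynamics
    theta_t = G theta_{t-1} + B u_{t-1} + omega_t, with Cov(omega_t) = W."""
    a = G @ m_prev + B @ u          # prior mean of theta_t
    R = G @ C_prev @ G.T + W        # prior covariance of theta_t
    return a, R
```

with `G` the identity and `W` non - zero this reproduces the random - walk special case discussed above , where the covariance grows at every prediction step .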
when the predictor becomes known , we can use equations [ eq : meanpred ] and [ eq : varpred ] to determine the mean and covariance matrix of the signal given and : , the covariance matrix between the signal and the parameters is given by .the latter follows from the fact that the signal is a linear function of the parameters , with as the weights .now we update the estimates from the prediction step to incorporate the new observation , obtaining the mean and covariance of the posterior .we first compute the following matrices ^{-1 } + \mathbf{\omega_t } , \text { and } \label{eq : qmatrix } \mathbf{a_t } = & \mathbf{r_tx_tq_t^{-1}}.\end{aligned}\ ] ] here is the -by- hessian matrix of the log - likelihood , evaluated at the predicted value of the signal . as we will see , in many models of interest , this matrix is the negative of the variance of evaluated at the predicted signal value .the matrix then grows with the variance of the predicted signal , but decreases when the expected variance of the response increases .we then compute the covariance via : computing the inverse of starting from equation [ eq : qmatrix ] can often be numerically unstable , e.g. , because the determinant of can be very small in magnitude .a more robust way to compute is via .\end{aligned}\ ] ] this expression follows directly from equations [ eq : varupdate2 ] and [ eq : qmatrix ] after applying the kailath variant of the woodbury identity ( e.g.
, see ) .we finally compute the mean of the parameters by : our main algorithm proceeds by executing the prediction and estimation steps for each arriving observation , namely evaluating equations [ eq : meanpred ] , [ eq : varpred ] , [ eq : varupdate2 ] and [ eq : meanupdate ] with every new observation .equations [ eq : meanupdate ] and [ eq : varupdate2 ] are approximate , and follow from ( 1 ) a second - order taylor expansion of around , and ( 2 ) assuming the prior is gaussian with mean and covariance given by and .a sketch of the argument is described in section [ sec : derivation ] . because the two assumptions we make hold exactly in the case of linear regression with a gaussian prior , equations [ eq : meanupdate ] and [ eq : varupdate2 ] are exact in that case and correspond to the standard kalman filter equations .west _ et al . _ ( ) make the different approximation that the prior is conjugate to the likelihood , obtaining a slightly different algorithm that has only been developed for the univariate response scenario .many regression models involve a scalar signal and a scalar response , where the predictor is now simply a vector .this is the situation for the most commonly encountered regression models , such as linear , logistic , poisson or exponential .the matrices and then also become scalars , so the update equations [ eq : meanupdate ] and [ eq : varupdate2 ] simplify to where the predicted signal is .the result is very appealing because no matrix inverses need to be computed .here we consider models where the response is in the exponential family of equation [ eq : nef ] , and where the natural parameter is related to the signal via equations [ eq : glm1 ] and [ eq : glm2 ] .the gradient in these models can be shown to be given by , so the gradient is always proportional to the error , the difference between the response and its mean according to the model at the given signal .the covariance matrix is in general a function of the signal but we drop
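for a scalar signal , the update can be written without any matrix inverse by applying the sherman - morrison identity to the rank - one hessian contribution . this is our own sketch , not the paper s verbatim equations : `grad` and `hess` stand for the first and second derivatives of the observation log - likelihood with respect to the signal , evaluated at the predicted signal .

```python
import numpy as np

def update_scalar(a, R, x, grad, hess):
    """Estimation step for a scalar signal f = x'a.
    grad and hess are the derivatives of the observation log-likelihood
    with respect to the signal, evaluated at the predicted signal.
    Sherman-Morrison turns the rank-one posterior-precision update into
    a direct covariance update, so no matrix inverse is needed."""
    s = x @ R @ x                     # prior variance of the signal
    Rx = R @ x
    C = R + (hess / (1.0 - hess * s)) * np.outer(Rx, Rx)  # posterior covariance
    m = a + C @ x * grad              # posterior mean (one Newton-style step)
    return m, C
```

in the linear - gaussian case ( `grad = y - f` , `hess = -1` for unit noise variance ) this reduces to the exact bayesian linear regression update .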
that dependence in our notation below to reduce clutter . the hessian is then obtained by differentiating equation [ eq : gradientglm ] with respect to the signal once more , resulting in = \frac{\partial}{\partial \mathbf{\lambda_t}}\biggr [ \frac{\partial \mathbf{h(\lambda_t})'}{\partial \mathbf{\lambda_t } } \mathbf{\sigma_{y_t}^{-1 } } \biggr ] \big(\mathbf{y_t}-\mathbf{h(\lambda_t)}\big ) \nonumber \\ - \frac{\partial \mathbf{h(\lambda_t})'}{\partial \mathbf{\lambda_t } } \mathbf{\sigma_{y_t}^{-1 } } \frac{\partial \mathbf{h(\lambda_t})}{\partial \mathbf{\lambda_t}}. \label{eq : hessianglm } \end{aligned}\ ] ] evaluating equations [ eq : gradientglm ] and [ eq : hessianglm ] for a given choice of the likelihood and link function ( which determines the response function ) at the predicted signal , and plugging the resulting expressions into equations [ eq : varupdate2 ] and [ eq : meanupdate ] completes the algorithm . when the canonical link is used , the natural parameter is equal to the signal , so , and .equations [ eq : gradientglm ] and [ eq : hessianglm ] simplify to so the gradient is proportional to the error , as before , and the hessian is proportional to the negative of the covariance of evaluating the gradient and hessian above at the predicted signal , and plugging in the resulting expressions into equations [ eq : varupdate2 ] and [ eq : meanupdate ] yields the update equations \mathbf{x_t'r_t } \label{eq : meanupdatecanonicalnefvar2 } \\ \mathbf{m_t } = & \mathbf{a_t}+ \mathbf{c_tx_t } \mathbf{\phi_t^{-1 } } \big(\mathbf{y_t}-\mathbf{h(f_t)}\big ) , \text { where } \label{eq : meanupdatecanonicalnef}\\ \mathbf{e_t}= & \mathbf{\phi_t^{-1 } \sigma_{y_t}(f_t ) \phi_t^{-1}}.\end{aligned}\ ] ] equation [ eq : meanupdatecanonicalnefvar2 ] is the numerically stable analog of equation [ eq : varupdate2 ] that avoids inverses of potentially close - to - singular matrices . 
* multivariate linear regression * is one of many examples that fall in this class of models .there , and can be shown to be in the natural exponential family ( e.g. , see equation [ eq : gaussiannef ] in the appendix ) , with and and covariance matrix equal to , which in this case is independent of the signal .so the equations above yield the standard kalman filter equations . other distributions in the natural exponential family , e.g. , the multinomial , have variances that are a function of the signal ; linear regression is the exception . *the univariate signal case * covers the majority of applications encountered in practice . here the signal , the response , and the nuisance parameter are all scalars , and the predictor is a vector , so the update equations become : with being the variance of the response evaluated at the predicted signal . equation [ eq : varupdatecanonicalnefunivariate ] shows that the effect of the new observation is to reduce the covariance of the parameters by an amount proportional to , and a gain that gets smaller when there is more variance in the predicted signal , as captured by , and larger when the response is expected to have a higher variance .many common regression models fall in this category .* univariate linear regression * , with and , is already in natural exponential form with , and with playing the role of the natural parameter ( i.e. , the canonical link was used to map the mean of the response to the signal ) . substituting and in equations [ eq : varupdatecanonicalnefunivariate ] and [ eq : meanupdatecanonicalnefunivariate ] yields the univariate kalman filter .
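substituting the gaussian gradient and hessian into the scalar update yields the familiar measurement update of the univariate kalman filter ; a sketch ( variable names are ours ) :

```python
import numpy as np

def linreg_update(a, R, x, y, sigma2):
    """Univariate dynamic linear regression: exact conjugate update,
    i.e. the measurement update of the standard Kalman filter with
    observation y = x'theta + noise, noise variance sigma2."""
    f = x @ a                      # predicted signal
    q = x @ R @ x + sigma2         # predicted response variance
    k = R @ x / q                  # Kalman gain
    m = a + k * (y - f)            # posterior mean
    C = R - np.outer(k, x @ R)     # posterior covariance
    return m, C
```
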
in * poisson regression * is a positive integer that follows a poisson distribution with mean .the likelihood is which is again in natural exponential form with as the natural parameter , and with .the variance of a poisson random variable is equal to its mean , so equations [ eq : varupdatecanonicalnefunivariate ] and [ eq : meanupdatecanonicalnefunivariate ] become . in * exponential regression * , is a non - negative real number that follows an exponential distribution with mean , so with mean and variance .note that here , so unlike other models here , the update in the mean is negatively proportional to the error , namely : in * logistic regression * the response is a bernoulli random variable .it takes on the value 1 with probability and the value 0 with probability .so is the response function , and its inverse is the link function .the likelihood becomes this is again in the natural exponential family with , and variance .the last equation above implies that is indeed the canonical link .so equations [ eq : varupdatecanonicalnefunivariate ] and [ eq : meanupdatecanonicalnefunivariate ] become . to derive the update equations , we start from equation [ eq : bayes ] , and view the likelihood as a function of the model parameters , namely .we then approximate about via the second - order taylor expansion : because the signal is given by , we have that . we also make the second approximation that .this approximation is what is needed to make the mathematics below work out , but can be justified in that the gaussian distribution is the continuous distribution that has maximum entropy given a mean and covariance matrix , and these are the only known statistics of .using the two approximations in equation [ eq : bayes ] results in being proportional to where equation [ eq : derivation2b ] follows from completing squares , e.g.
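as an illustration of the canonical - link updates , here is a sketch of the dynamic logistic regression case , where the gradient is the error and the hessian is the negative bernoulli variance , both evaluated at the predicted signal . names are ours , and the rank - one form of the covariance update is an algebraically equivalent way to avoid matrix inverses .

```python
import numpy as np

def logistic_update(a, R, x, y):
    """Dynamic logistic regression update with the canonical link:
    gradient y - p and Hessian -p(1-p), both evaluated at the
    predicted signal f = x'a."""
    f = x @ a                         # predicted signal
    p = 1.0 / (1.0 + np.exp(-f))      # predicted success probability
    s = x @ R @ x                     # prior variance of the signal
    v = p * (1.0 - p)                 # response variance at f
    Rx = R @ x
    C = R - (v / (1.0 + v * s)) * np.outer(Rx, Rx)   # posterior covariance
    m = a + C @ x * (y - p)           # posterior mean
    return m, C
```

the poisson case follows the same template with `p` replaced by `exp(f)` for both the mean and the variance .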
, see appendix [ sec : completingsquares ] , and the proportional sign indicates that terms independent of were dropped .the result shows that under our approximations . to finish the argument , we substitute the expressions in equation [ eq : derivation1 ] into equation [ eq : derivation3 ] , and apply the woodbury matrix inversion formula to the expression for in equation [ eq : derivation3 ] to finally get the update equations [ eq : varupdate2 ] and [ eq : meanupdate ] .the models we discuss here can be and have been applied to a wide range of situations to model , analyze and forecast univariate and multivariate time series .e.g. , see chapter 14 in or for a range of examples . here we apply the models discussed to the contextual multi - armed bandits scenario , where so far only univariate time series modeled through a linear or logistic regression have been considered . in the latter case , the only treatment known to us approximates the covariance matrix as diagonal .the models we have discussed enable explore / exploit algorithms for contextual multi - armed bandit scenarios where the reward depends on a multivariate response vector distributed according to the exponential family , and where the true parameters of the different arms are dynamic .we hope this broadens the situations where contextual multi - armed bandit approaches can be helpful .the standard setup involves a player interacting with a slot machine with arms over multiple rounds .every time an arm is played , a reward gets generated .different plays of the same arm generate different rewards , i.e. , the reward is a random variable .different arms have different and unknown reward distributions , which are a function of an observed context .
at every time step, the player must use the observed context for each arm and all the history of the game to decide which arm to pull and then collect the reward .we seek algorithms that the player can use to decide what arm to play at every round in order to maximize the sum of the rewards received .these algorithms build statistical models to predict the reward for each arm based on the context , and decide how to balance exploring arms about which little is known with the exploitation of arms that have been explored enough to be predictable .the exploration / exploitation trade - off requires having a handle on the uncertainty of the predicted reward , so the models used need to predict at least the mean and variance of the reward for each arm .real applications such as personalized news recommendations or digital advertising often have tight temporal and computational constraints per round , so the methods to update the statistical models with every outcome need to be online .popular and useful model choices describe the univariate reward for each arm as a linear function of the context plus gaussian noise ( i.e. , through a linear regression , e.g. , see ) , or through a logistic regression ( ) . in the latter case ,the algorithm that updates the model based on new observations uses a diagonal approximation of the parameter covariance matrix . in all these references , model parameters are assumed static ( although their estimates change with every observation ) . 
in the non - contextual multi - armed bandit problem , recent efforts have tried to generalize the distributions for the rewards to the univariate exponential family , and as far as we know this is the first treatment for the contextual case .we consider the following scenario .the parameters of all arms are compiled in the single parameter vector that , unlike in other settings , is allowed to change over time according to equation [ eq : pardynamics ] .some entries in correspond to parameters for a single arm , and others are parameters shared across multiple or all arms .we describe the model parameters via where is the history of contexts and responses seen up to and including round . at the start of round , we observe the context matrix for each arm , and combine this information with our knowledge of to decide which arm to play .we denote the arm played by , and its corresponding context matrix simply by to make it consistent with the notation in the rest of this paper . playing arm results in a response with a distribution in the ( possibly multivariate ) exponential family that depends on the context .the relation between the response and the context is given by the dynamic glm in section [ sec : dynglm ] , so the mean of is a function of the signal .the response is used to update our estimates of the model parameters , according to the algorithm described in section [ sec : dynglm ] , to be used in round .we assume the reward received in round is a known deterministic function of the response , e.g. , a linear combination of the entries in .if we knew the actual model parameters , the optimal strategy to maximize the rewards collected throughout the game would be to play the arm with the highest average reward , i.e. , . ] i.e.
, the difference between the means of the rewards of the optimal arm and the arm played given the context and the model parameters .unlike the more standard contextual setup , ours allows for the explicit modeling of parameter dynamics .it also broadens the choice of probability distribution to use for the response or reward to more naturally match the model choice to the nature and dimensionality of the reward data .e.g. , we can use a poisson regression when the reward is a positive integer , or have a response with multiple entries , each with a different distribution , use all response entries to update our parameter estimates , and then define the reward to be a single entry in the response .a contextual multi - armed bandit algorithm uses the knowledge of the parameters at each round and the context to decide which arm to play .the widely used upper confidence bound ( ucb ) approach constructs an upper bound on the reward for each arm using the mean and covariance of the parameter estimates at every round , and selects as the arm with the highest upper bound , e.g. , see .another approach that has gained recent popularity ( , ) is the so - called thompson sampling introduced in , where arm is selected at round with the probability that it is optimal given the current distribution for the model parameters .it is only recently that asymptotic bounds for its performance have been developed both for the contextual ( , for the linear regression case only ) and the non - contextual ( ) case .the studies mentioned have found thompson sampling to perform at par or better relative to other approaches , and to be more robust than ucb when there is a delay in observing the rewards .thompson sampling is also very easy to implement . in one variant we sample a parameter value from the distribution and let .
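the per - arm sampling variant of thompson sampling can be sketched as follows ; `expected_reward` is a hypothetical stand - in for the deterministic map from signal to mean reward assumed in the text , and the other names are ours as well .

```python
import numpy as np

def thompson_choose(mean, cov, contexts, expected_reward, rng):
    """Thompson sampling over arms: draw an independent parameter
    sample per arm from N(mean, cov) and play the arm whose sampled
    expected reward is highest.  contexts[i] is arm i's context and
    expected_reward(f) maps a signal to a mean reward."""
    L = np.linalg.cholesky(cov)       # for correlated gaussian samples
    scores = []
    for X in contexts:
        theta = mean + L @ rng.standard_normal(mean.shape[0])  # one sample per arm
        scores.append(expected_reward(X.T @ theta))
    return int(np.argmax(scores))
```
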
] the latter approach is found in for the linear regression case to have a total regret that asymptotically scales with the number of model parameters rather than with its square as in the first variant , so we use the second variant in our simulations .our goal here is to demonstrate how our online regression models work in the contextual bandits case when the observations are multivariate and not gaussian , and when the model parameters are allowed to be dynamic .the goal is not to compare different contextual bandit algorithms , so we only focus on thompson sampling .the model we simulate is inspired by the problem of optimizing the netflix sign - up experience .each arm corresponds to a variant of the sign - up pages that a visitor experiences : a combination of text displayed , supporting images , language chosen , etc .the context corresponds to the visitor s type of device and/or browser , the day of week , time of day , country where the request originated , etc .some of these predictors are continuous , such as the time of day , and others are categorical , such as the day of the week . the goal is maximizing signups by choosing the sign - up variant that is most likely to lead to a conversion given the context .we also observe other related outcomes associated with each visitor , such as the time spent on the sign - up experience , and whether they provide their email before signing up .we assume that these other observations are also related to the model parameters ( though possibly with different context vectors ) , and use them to improve our parameter estimates .so our response is multivariate , even if the reward is based on a single entry of the response vector .lastly , we want to let the model parameters drift over time , because we know that different aspects of the netflix product become more or less relevant over time , e.g.
, different videos in our streaming catalog will be the most compelling a month from now than today .we denote the response by ', ] and assume that the entries of the response are independent of each other conditioned on the signal , so the nuisance parameter matrix is diagonal and time - independent , with the vector ' ] as its diagonal .the context matrix for arm has one row for each model parameter entry and one column per response entry .some rows correspond to parameters shared by all arms , and others to parameters corresponding to a single arm . to construct , we simulate continuous and categorical predictors that we sample at every round .we let play the role of the continuous predictors , and sample each column from a zero - mean gaussian with covariance .the diagonal entries in are sampled independently from an exponential distribution with rate 1 , and the off - diagonal entries all have a correlation of .we let the categorical predictor be a sample from a uniform categorical distribution with entries , i.e.
, all entries in are zero except for one that is set to 1 .we also let be an indicator vector that specifies that arm is being evaluated .it has entries that are all zero except for its -th entry , which is set to 1 .letting be a column vector with entries , all set to 1 , we define the context matrix for arm as .here denotes the kronecker product between two vectors or matrices .the first rows of simply specify what arm is being evaluated , the next rows correspond to the continuous predictors , followed by rows for the categorical predictors .the next rows are the interaction terms between the continuous predictors and the arm ( only rows corresponding to the arm are non - zero ) , and the last rows are the interaction terms between the categorical predictor and the arm chosen ( all these rows are zero except one that is set to 1 ) .the number of rows and model parameters is then .we let and .note that without the interaction terms , the optimal arm would be independent of the context .we set the model parameter dynamics to , where has diagonal entries that are independent exponential random variables with rate , and a correlation coefficient of 0.2 for its off - diagonal entries .we sample a different matrix at every round .we assume the first visitor arrives at , and start the game by sampling from a zero - mean gaussian with diagonal covariance matrix .the diagonal entries are independent samples from an exponential distribution with rate equal to 1 .we initialize the mean and covariance estimates of as and , where is the identity matrix .
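the block structure of the context can be sketched for the scalar - signal case , where the context is a single vector per arm ( function and argument names are ours ) :

```python
import numpy as np

def make_context(arm, n_arms, z, w):
    """Build the context vector for one arm, mirroring the block
    structure described above: arm indicator, continuous predictors z,
    categorical one-hot w, then arm-by-z and arm-by-w interactions."""
    e = np.zeros(n_arms)
    e[arm] = 1.0                                # which arm is evaluated
    return np.concatenate([e, z, w,
                           np.kron(e, z),       # arm x continuous interactions
                           np.kron(e, w)])      # arm x categorical interactions
```

only the interaction blocks for the evaluated arm are non - zero , which is what makes the optimal arm depend on the context .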
at round , starting from the mean and covariance estimates of the parameters from the previous round , we compute the prior mean and covariance of the parameters .we then sample one value of the model parameters for each arm from the resulting prior distribution , and construct the context matrices for each arm .we use the context matrices and the parameter samples to choose ( which defines ) based on thompson sampling .we play and observe the response to obtain the round s reward , update the parameter estimates to obtain and covariance , and start the next round . [ figure : as in figure [ fig : avgregrets ] , but using rather than to increase the diffusion rate of the model parameters . ] figure [ fig : onerealization ] shows the result of one simulation with 2000 rounds and 10 arms labeled a through j. the left plot shows the optimal arm ( that with the highest predicted reward based on the actual parameters ) in blue and the arm played , selected via thompson sampling and the parameter estimates , in orange .it is evident from the spread of the orange dots across arms that , as expected , there was more exploration at the start of the game .the spread of the blue dots shows that the interaction terms between the context and the arm result in different arms being optimal in different rounds .the right plot shows the fraction of rounds when the optimal arm was not chosen through the first rounds for all values of in the simulation .it drops under 0.4 from close to 1.0 at the start .the blue line on the same figure shows the cumulative average regret rate per round , which is the sum of regrets per round divided by the number of rounds .the regret per round is simply , both evaluated using the actual parameters .the yellow line shows the cumulative random regret that would have resulted from choosing any arm with equal probability , independently of the model parameter estimates or the
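putting the pieces together , one round of the simulated game might look as follows for a single bernoulli response per arm , assuming identity parameter dynamics with no input term ( a simplification of the setup above ; all names are ours ) :

```python
import numpy as np

def run_round(m, C, contexts, y_fn, W, rng):
    """One round of a dynamic logistic bandit: random-walk predict,
    per-arm Thompson sample, play the argmax arm, then a canonical
    logistic update with the observed binary response y_fn(arm)."""
    a, R = m, C + W                                  # predict (G = I, B = 0)
    L = np.linalg.cholesky(R)
    samples = [a + L @ rng.standard_normal(a.size) for _ in contexts]
    arm = int(np.argmax([x @ th for x, th in zip(contexts, samples)]))
    x, y = contexts[arm], y_fn(arm)
    f = x @ a                                        # predicted signal
    p = 1.0 / (1.0 + np.exp(-f))                     # predicted conversion rate
    s = x @ R @ x
    v = p * (1.0 - p)
    Rx = R @ x
    C_new = R - (v / (1.0 + v * s)) * np.outer(Rx, Rx)
    m_new = a + C_new @ x * (y - p)
    return m_new, C_new, arm
```

iterating this function over rounds while logging `arm` against the argmax under the true parameters yields regret curves of the kind discussed here .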
context .we then repeated the full simulation 30 times and averaged the resulting time series across runs , for different scenarios with a different number of arms .figure [ fig : avgregrets ] shows the results . as expected , the probability of error , the regret and the random regret all increase with a larger number of arms .but the increased regret rate is quite mild , and continues to drop as more rounds are played .the benefit of the contextual bandit algorithm relative to choosing an arm uniformly at random is the difference between the random regret rate and the regret rate , and it increases nicely as the number of arms increases .we expect our approach to fall apart when the parameters drift so quickly over time that the information in the observations is not enough to keep the covariance of the model parameters from growing .we explored this by increasing the parameter diffusion rate by changing from to .the results are shown in figure [ fig : avgregrets3 ] : although all metrics worsen , the regret rate still decreases nicely over time despite the large parameter fluctuations .we described a framework to easily obtain online algorithms that approximately estimate the mean and covariance matrix of the model parameters for a wide range of multivariate regression models where the model parameters change over time .although our derivation is novel , these algorithms have been well known within a subset of the time - series community for at least a decade , but to the best of the author s knowledge , are not well known within the broader machine learning and statistical community , where we think these tools can be helpful .we also propose using the algorithms in the contextual multi - armed bandit problem , where the approach here allows for dynamic parameters and a wider range of reward distributions .the methods we discuss here correspond to the so - called filtering problem in the kalman filter and related literature .there are other related
algorithms that solve the so - called smoothing problem , i.e. , that estimate the parameters at any point in the past using all the observations .the latter have been useful for time - series analysis , but seem less obviously useful in machine learning applications ( though they are well known for the standard kalman filter , e.g. , see or ) , and so are not covered here .also , in situations where the parameter dynamics are non - linear , or where higher moments of the parameter estimates are desired , there are good alternative simulation - based approaches , e.g. , that rely on ideas from importance sampling and particle filters , that may be better choices than the methods described here .the best overviews of the full suite of methods that we know of are , and .we thank devesh parekh and dave hubbard for the initial discussions that triggered this research , stephen boyd and george c. verghese for the suggestion to relate this to kalman filters , and to justin basilico , roelof van zwol , and vijay bharadwaj for useful feedback on this paper .10 s. agrawal and n. goyal .further optimal regret bounds for thompson sampling . in _ proceedings of the sixteenth international conference on artificial intelligence and statistics _ , pages 99107 , 2013 .s. agrawal and n. goyal .thompson sampling for contextual bandits with linear payoffs . in _ proceedings of the 30th international conference on machine learning ( icml-13 ) _ , pages 127135 , 2013 .l. bottou .large - scale machine learning with stochastic gradient descent . in _ proceedings of compstat2010_ , pages 177186 .springer , 2010 .o. chapelle and l. li .an empirical evaluation of thompson sampling . in _ advances in neural information processing systems _ , pages 22492257 , 2011 .j. durbin and s. j. koopman .time series analysis of non - gaussian observations based on state space models from both classical and bayesian perspectives . ,62(1):356 , 2000 . j. durbin and s. j. koopman . 
.number 38 .oxford university press , 2012 .l. fahrmeir .posterior mode estimation by extended kalman filtering for multivariate dynamic generalized linear models ., 87(418):501509 , 1992 . j. harrison and m. west .springer , 1999 .r. e. kalman . a new approach to linear filtering and prediction problems ., 82(1):3545 , 1960 .b. m. klein . .phd thesis , citeseer , 2003 .n. korda , e. kaufmann , and r. munos .thompson sampling for 1-dimensional exponential family bandits . in _ advances in neural information processing systems _ ,pages 14481456 , 2013 .l. li , w. chu , j. langford , and r. e. schapire .a contextual - bandit approach to personalized news article recommendation . in _ proceedings of the 19th international conference on world wide web _ ,pages 661670 .acm , 2010 .t. minka . from hidden markov models to linear dynamical systems .technical report , citeseer , 1999 .c. n. morris . natural exponential families with quadratic variance functions . , pages 6580 , 1982 .c. n. morris .natural exponential families with quadratic variance functions : statistical theory . , pages 515529 , 1983 .k. murphy .filtering , smoothing and the junction tree algorithm . , 1999 .j. a. nelder and r. baker .generalized linear models . , 1972 .k. b. petersen , m. s. pedersen , et al .the matrix cookbook ., 7:15 , 2008 .d. simon . .john wiley & sons , 2006 . w. r. thompson . on the likelihood that one unknown probability exceeds another in view of the evidence of two samples ., pages 285294 , 1933 .m. j. wainwright and m. i. jordan .graphical models , exponential families , and variational inference . , 1(1 - 2):1305 , 2008 .m. west , p. j. harrison , and h. s. 
migon .dynamic generalized linear models and bayesian forecasting ., 80(389):7383 , 1985 .assuming that and using equation [ eq : pardynamics ] for the parameter dynamics , we have that = & e\big [ \mathbf{g_t\theta_{t-1 } } + \mathbf{b_t u_{t-1}}+\mathbf{\omega_t}|d_{t-1}\big ] \nonumber \\ = & \mathbf{g_t m_{t-1 } } + \mathbf{b_t u_{t-1 } } = \mathbf{a_t},\end{aligned}\ ] ] resulting in equation [ eq : meanpred ] .here we used the assumption that = \mathbf{0} ] of the signal are derived as follows : = & \mathbf{x_t'}e[\mathbf{\theta_t}|d_{t-1 } ] = \mathbf{x_t'a_t}=\mathbf{f_t}. \nonumber \\\mathbf{\omega_t}= & e\big[\mathbf{x_t'}(\mathbf{\theta_t - a_t})(\mathbf{\theta_t - a_t } ) ' \mathbf{x_t } | d_{t-1},\mathbf{x_t } \big ] \nonumber \\ = & \mathbf{x_t'r_tx_t } .\end{aligned}\ ] ] so .lastly , the covariance ] where for any matrix is a column vector resulting from stacking all the columns in , and re - arrange equation [ eq : gaussiannef ] to have the natural exponential form in equation [ eq : ef ] , now with natural parameter being both a function of and with is proportional to : }_{\mathbf{\eta}'}\underbrace{[\mathbf{y ' } \vec(\mathbf{yy'})]'}_{\mathbf{t(y ) } } - \underbrace { \big(\frac{1}{2 } \mathbf{\mu}'\mathbf{\sigma^{-1 } } \mathbf{\mu } + \frac{1}{2}\log(|\sigma|)\big)}_{b(\mathbf{\eta } ) } , \label{eq : gaussiannef2}\end{aligned}\ ] ] so in this case and the natural parameter becomes a function of as well as of the mean . the function $ ] can be shown to equal . the last equality follows because the integrand in the first term of the second equation is also a probability distribution in the natural exponential family with parameter , so the integral equals 1 . taking the first derivative of and evaluating it at yields = \frac{\partial b(\mathbf{\eta , \phi})}{\partial \mathbf{\eta}}. 
\label{eq : mgfmean}\end{aligned}\ ] ] similarly , the second derivative of at yields \mathbf{\phi^{-1 } } = \frac{\partial b(\mathbf{\eta , \phi})}{\partial \mathbf{\eta } } \biggr ( \frac{\partial b(\mathbf{\eta , \phi})}{\partial \mathbf{\eta}}\biggr ) ' + \frac{\partial^2 b(\mathbf{\eta , \phi})}{\partial \mathbf{\eta^2 } } , \text { so } \nonumber \\ \mathbf{\phi } \frac{\partial^2 b(\mathbf{\eta , \phi})}{\partial \mathbf{\eta^2 } } \mathbf{\phi } = e\biggr[\biggr(\mathbf{t(y)-e[\mathbf{t(y)}]}\biggr)\biggr(\mathbf{t(y)-e[\mathbf{t(y)}]}\biggr)'\biggr ] .\label{eq : mgfvar}\end{aligned}\ ] ] setting above yields equations [ eq : nefmean ] and [ eq : nefvar ] .the derivation of equation [ eq : derivation2 ] required turning the expression where ( which is symmetric and positive definite ) and into the so - called perfect square expression for .we have so we need this implies that | we study the problem of estimating the parameters of a regression model from a set of observations , each consisting of a response and a predictor . the response is assumed to be related to the predictor via a regression model of unknown parameters . often , in such models the parameters to be estimated are assumed to be constant . here we consider the more general scenario where the parameters are allowed to evolve over time , a more natural assumption for many applications . we model these dynamics via a linear update equation with additive noise that is often used in a wide range of engineering applications , particularly in the well - known and widely used kalman filter ( where the system state it seeks to estimate maps to the parameter values here ) . we derive an approximate algorithm to estimate both the mean and the variance of the parameter estimates in an online fashion for a generic regression model . this algorithm turns out to be equivalent to the extended kalman filter . 
we specialize our algorithm to the multivariate exponential family distribution to obtain a generalization of the generalized linear model ( glm ) . because the common regression models encountered in practice such as logistic , exponential and multinomial all have observations modeled through an exponential family distribution , our results are used to easily obtain algorithms for online mean and variance parameter estimation for all these regression models in the context of time - dependent parameters . lastly , we propose to use these algorithms in the contextual multi - armed bandit scenario , where so far model parameters are assumed static and observations univariate and gaussian or bernoulli . both of these restrictions can be relaxed using the algorithms described here , which we combine with thompson sampling to show the resulting performance on a simulation . |
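the prediction - step recursions derived in the appendix fragment above admit a direct numerical sketch . the names ( m , c , g , b , w , x ) follow the text's notation ; the prior - covariance recursion and all concrete numbers below are illustrative assumptions , not values from the paper :

```python
import numpy as np

# one prediction step of the dynamic model theta_t = G_t theta_{t-1}
# + B_t u_{t-1} + omega_t, omega_t ~ N(0, W_t). the covariance recursion
# r = g c g' + w is the standard one implied by these dynamics (assumption).
def predict(m, c, g, b, u, w, x):
    a = g @ m + b @ u          # prior mean      a_t = G_t m_{t-1} + B_t u_{t-1}
    r = g @ c @ g.T + w        # prior covariance R_t = G_t C_{t-1} G_t' + W_t
    f = x @ a                  # signal mean      f_t = x_t' a_t
    q = x @ r @ x              # signal variance       x_t' R_t x_t
    return a, r, f, q

m = np.array([1.0, 0.0]); c = np.eye(2)          # posterior moments at t-1
g = np.array([[1.0, 1.0], [0.0, 1.0]])           # local - trend evolution
b = np.zeros((2, 1)); u = np.zeros(1)            # no control input
w = 0.1 * np.eye(2)                              # evolution noise covariance
x = np.array([1.0, 0.0])                         # regressor: observe the level
a, r, f, q = predict(m, c, g, b, u, w, x)
```

the filtered moments ( m , c ) would then be updated from the next observation , as in the kalman / extended kalman recursions the abstract refers to .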
concurrent stochastic games are played by two players on a finite state space for an infinite number of rounds . in every round, the two players simultaneously and independently choose moves ( or actions ) , and the current state and the two chosen moves determine a probability distribution over the successor states .the outcome of the game ( or a _ play _ ) is an infinite sequence of states .these games were introduced by shapley , and have been one of the most fundamental and well studied game models in stochastic graph games .we consider -regular objectives specified as parity objectives ; that is , given an -regular set of infinite state sequences , player 1 wins if the outcome of the game lies in . otherwise , player 2 wins , i.e. , the game is zero - sum .the class of concurrent stochastic games subsumes many other important classes of games as sub - classes : ( 1 ) _ turn - based stochastic _ games , where in every round only one player chooses moves ( i.e. , the players make moves in turns ) ; and ( 2 ) _ markov decision processes ( mdps ) _ ( one - player stochastic games ) .concurrent games and the sub - classes provide a rich framework to model various classes of dynamic reactive systems , and -regular objectives provide a robust specification language to express all commonly used properties in verification , and all -regular objectives can be expressed as parity objectives .thus concurrent games with parity objectives provide the mathematical framework to study many important problems in the synthesis and verification of reactive systems ( see also ) .the player-1 _ value _ of the game at a state is the limit probability with which player 1 can ensure that the outcome of the game lies in ; that is , the value is the maximal probability with which player 1 can guarantee against all strategies of player 2 .symmetrically , the player-2 _ value _ is the limit probability with which player 2 can ensure that the outcome of the game lies outside .the problem 
of studying the computational complexity of mdps , turn - based stochastic games , and concurrent games with parity objectives has received a lot of attention in the literature . markov decision processes with -regular objectives have been studied in and the results show existence of pure ( deterministic ) memoryless ( stationary ) optimal strategies for parity objectives and that the problem of value computation is solvable in polynomial time . turn - based stochastic games with the special case of reachability objectives have been studied in and existence of pure memoryless optimal strategies has been established and the decision problem of whether the value at a state is at least a given rational value lies in np ∩ conp . the existence of pure memoryless optimal strategies for turn - based stochastic games with parity objectives was established in , and again the decision problem lies in np ∩ conp . concurrent parity games have been studied in and for concurrent parity games optimal strategies need not exist , and -optimal strategies ( for ) require both infinite memory and randomization in general , and the decision problem can be solved in pspace . almost all results in the literature consider the problem of computing values and optimal strategies when the game model is given precisely along with the objective . however , it is often unrealistic to know the precise transition probabilities , which are only estimated through observation . since the transition probabilities are not known precisely , an extremely important question is how robust the analysis of concurrent games and its sub - classes with parity objectives is with respect to small changes in the transition probabilities . this question has been largely ignored in the study of concurrent and turn - based stochastic parity games .
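as a concrete instance of the value computations surveyed above , the following is a minimal sketch of value iteration for the maximal reachability probability in an mdp ( reachability being the simplest special case of a parity objective ) ; the mdp itself is a made - up example :

```python
# a made-up mdp: state 2 is the target, state 3 a losing sink; both absorbing.
mdp = {
    0: {"risky": [(0.5, 2), (0.5, 3)], "safe": [(1.0, 1)]},
    1: {"try":   [(0.4, 2), (0.6, 3)]},
}

def reach_values(mdp, target, sink, iters=200):
    # value iteration: v(s) = max over actions of the expected successor value
    v = {s: 0.0 for s in mdp}
    v[target], v[sink] = 1.0, 0.0
    for _ in range(iters):
        v.update({s: max(sum(p * v[t] for p, t in succ)
                         for succ in mdp[s].values()) for s in mdp})
    return v

v = reach_values(mdp, target=2, sink=3)
```

note that the optimal choices found here are pure memoryless ( play "risky" at state 0 ) , matching the existence results for mdps cited above .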
in this paper we study the following problems related to continuity and robustness of values : ( 1 ) _ ( continuity of values ) : _ under what conditions can continuity of the value function be proved for concurrent parity games ; ( 2 ) _ ( robustness of values ) : _ can quantitative bounds be obtained on the difference of the value function in terms of the difference of the transition probabilities ; and ( 3 ) _ ( robustness of optimal strategies ) : _ do optimal strategies of a game remain -optimal , for , if the transition probabilities are slightly changed . _ our contributions . _ our contributions are as follows : we consider _ structurally equivalent _ game structures , where the supports of the transition probabilities are the same , but the precise transition probabilities may differ . we show the following results for structurally equivalent concurrent parity games : _ quantitative bound . _ we present a quantitative bound on the difference of the value functions of two structurally equivalent game structures in terms of the difference of the transition probabilities . we show that when the difference in the transition probabilities is small , our bound is asymptotically optimal . our example to show the matching lower bound is on a markov chain , and thus our result shows that the bound for a markov chain can be generalized to concurrent games . _ value continuity . _ we show _ value continuity _ for structurally equivalent concurrent parity games , i.e.
, as the difference in the transition probabilities goes to 0 , the difference in value functions also goes to 0 .we then show that the structural equivalence assumption is necessary : we show a family of markov chains ( that are not structurally equivalent ) where the difference of the transition probabilities goes to 0 , but the difference in the value functions is 1 .it follows that the structural equivalence assumption is both necessary ( even for markov chains ) and sufficient ( even for concurrent games ) .it follows from above that our results are both optimal ( quantitative bounds ) as well as tight ( assumption both necessary and sufficient ) .our result for concurrent parity games is also a significant quantitative generalization of a result for concurrent parity games of which shows that the set of states with value 1 remains same if the games are structurally equivalent .we also argue that the structural equivalence assumption is not unrealistic in many cases : a reactive system consists of many state variables , and given a state ( valuation of variables ) it is typically known which variables are possibly updated , and what is unknown is the precise transition probabilities ( which are estimated by observation ) .thus the system that is obtained for analysis is structurally equivalent to the underlying original system and it only differs in precise transition probabilities . 
for turn - based stochastic parity games the value continuity and the quantitative bounds are the same as for concurrent games . we also prove a stronger result for structurally equivalent turn - based stochastic games that shows that along with continuity of the value function , there is also a robustness property for pure memoryless optimal strategies . more precisely , for all , we present a bound , such that any pure memoryless optimal strategy in a turn - based stochastic parity game is an -optimal strategy in every structurally equivalent turn - based stochastic game such that the transition probabilities differ by at most . our result has deep significance as it allows the rich literature of work on turn - based stochastic games to carry over robustly to structurally equivalent turn - based stochastic games . as argued before , the model of a turn - based stochastic game obtained for analysis may differ slightly in the precise transition probabilities , and our results show that the analysis on the slightly imprecise model using the classical results carries over to the underlying original system with small error bounds . our results are obtained as follows . the result of shows that the value function for concurrent parity games can be characterized as the limit of the value function of concurrent multi - discounted games ( concurrent discounted games with different discount factors associated with every state ) . there exist bounds on the difference of the value functions of discounted games ; however , these bounds depend on the discount factor , and in the limit give trivial bounds ( and in general this approach does not work , as value continuity can not be proven in general and the structural equivalence assumption is necessary ) . we use a classical result on markov chains by freidlin and wentzell and generalize a result of solan from markov chains with a single discount to markov chains with multi - discounted objectives to obtain a bound that is independent of the discount factor for
structurally equivalent games .then the bound also applies when we take the limit of the discount factors , and gives us the desired bound .our paper is organized as follows : in section [ sec : defn ] we present the basic definitions , in section [ sec : mc ] we consider markov chains with multi - discounted and parity objectives ; in section [ sec : games ] ( subsection [ sec : tb ] ) we prove the results related to turn - based stochastic games ( item ( 2 ) of our contributions ) and finally in subsection [ sec : conc ] we present the quantitative bound and value continuity for concurrent games along with the two examples to illustrate the asymptotic optimality of the bound and the structural equivalence assumption is necessary .detailed proofs are presented in the appendix .in this section we define game structures , strategies , objectives , values and present other preliminary definitions. * probability distributions . * for a finite set , a _ probability distribution _ on is a function ] .let ] the set of all game structures ( resp .markov chains ) that are structurally equivalent to .* ratio and absolute distances . 
* given two game structures and , the _ absolute distance _ of the game structures is maximum absolute difference in the transition probabilities .formally , .the absolute distance for two markov chains and is .we now define the ratio distance between two structurally equivalent game structures and markov chains .let and be two structurally equivalent game structures .the _ ratio _distance is defined on the ratio of the transition probabilities .formally , the ratio distance between two structurally equivalent markov chains and is .* remarks about the distance functions .* we first remark that the ratio distance is not necessarily a metric .consider the markov chain with state space and let .for consider the transition functions such that , for all .let be the markov chain with transition function .then we have and , and hence .the above example is from .also note that is only defined for structurally equivalent game structures , and without the assumption is .we also remark that the absolute distance that measures the difference in the transition probabilities is the most intuitive measure for the difference of two game structures .[ prop_dist ] let be a game structure ( resp .markov chain ) such that the minimum positive transition probability is . for all game structures ( resp .markov chains ) \!]_\equiv}} ] , i.e. , it is the expected mean - discounted time for when the starting state is , where the expectation measure is defined by the markov chain with transition function .we now present a lemma that shows the value function for multi - discounted markov chains can be expressed as ratio of two polynomials ( the result is obtained as a simple extension of a result of solan ) . 
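the two distance functions defined above can be sketched in code . since the exact formulas are elided in the extracted text , the max - ratio - minus - one convention used here for the ratio distance is an assumption , chosen so that the relation dist_r <= dist_a / eta of proposition [ prop_dist ] holds :

```python
# absolute and ratio distances between two markov chains, given as
# dicts state -> {successor: probability}. the max-ratio-minus-one form
# of the ratio distance is an assumption (the exact formula is elided).
def abs_dist(d1, d2):
    return max(abs(d1[s].get(t, 0.0) - d2[s].get(t, 0.0))
               for s in d1 for t in set(d1[s]) | set(d2[s]))

def ratio_dist(d1, d2):
    # only defined under structural equivalence (same supports)
    assert all(set(d1[s]) == set(d2[s]) for s in d1), "not structurally equivalent"
    return max(max(d1[s][t] / d2[s][t], d2[s][t] / d1[s][t])
               for s in d1 for t in d1[s]) - 1.0

d1 = {0: {0: 0.5, 1: 0.5}, 1: {1: 1.0}}   # made-up structurally
d2 = {0: {0: 0.6, 1: 0.4}, 1: {1: 1.0}}   # equivalent chains
eta = min(p for d in (d1, d2) for s in d for p in d[s].values())  # = 0.4
```

on this example the ratio distance exactly attains the bound dist_a / eta .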
[ lemm : ratio ] for markov chains defined on state space , for all initial states , for all states , for all discount vectors , there exists two polynomials and in variables , where such that the following conditions hold : 1 .the polynomials have degree at most with non - negative coefficients ; and 2 .for all transition functions over we have , where , and denote the values of the function and such that all the variables is instantiated with values as given by the transition function . _( sketch ) ._ we present a sketch of the proof ( details in appendix ) .fix a discount vector .we construct a markov chain as follows : , where is a copy of states of ( and for a state we denote its corresponding copy as ) ; and the transition function is defined below 1 . for all ( i.e. , all copy states are absorbing ) ; 2 . for have i.e. , it goes to the copy with probability , it follows the transition in the original copy with probabilities multiplied by .we first show that for all and we have ; i.e. , the expected mean - discounted time in when the original markov chain starts in is the probability in the markov chain that the first hitting state out of is the copy of the state .the claim is easy to verify as both and are the unique solution of the following system of linear equations : for all we have we now claim that for all .this follows since for all we have and since we have .now we observe that we can apply theorem [ thrm - fw ] on the markov chain with as the set of states of theorem [ thrm - fw ] , and obtain the result . indeed the terms and are independent of , and the two products of equation ( [ eq1 ] ) each contains at most terms of the form for .thus the desired result follows .[ lemm : poly ] let be a polynomial function with non - negative coefficients of degree at most .let and be two non - negative vectors such that for all we have .then we have . 
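the copy - chain construction in the proof sketch above can be checked numerically on a small chain ( the transition matrix and discount factors below are assumptions ) : solving the linear system for the absorption probabilities of the copy chain reproduces the expected mean - discounted times computed from their series definition .

```python
import numpy as np

# the copy chain: from s, move to the absorbing copy of s with prob 1-lam(s),
# otherwise follow the original transitions scaled by lam(s).
P = np.array([[0.5, 0.5],
              [0.3, 0.7]])          # made-up 2-state transition matrix
lam = np.array([0.9, 0.8])          # state-dependent discount factors

Q = lam[:, None] * P                # transient -> transient part of the copy chain
# B[s, t] = probability that the first absorbing state hit is the copy of t
B = np.linalg.solve(np.eye(2) - Q, np.diag(1 - lam))

# expected mean-discounted time at t, from its truncated series definition
phi = sum(np.linalg.matrix_power(Q, n) for n in range(200)) @ np.diag(1 - lam)
```

the rows of B sum to 1 , reflecting that the copy chain is absorbed with probability 1 , and B agrees with the series computation , as the proof claims .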
[ lemm_mc_multi ] let and be two structurally equivalent markov chains . for all non - negative reward functions such that the reward function is bounded by 1 , for all discount vectors , for all we have ; i.e. , the absolute difference of the value functions for the multi - discounted objective is bounded by . the proof of lemma [ lemm_mc_multi ] uses lemma [ lemm : ratio ] and lemma [ lemm : poly ] and is presented in the appendix . [ thrm_mc_multi ] let and be two structurally equivalent markov chains . let be the minimum positive transition probability in . the following assertions hold : 1 . for all non - negative reward functions such that the reward function is bounded by 1 , for all discount vectors , for all we have & \leq & \displaystyle ( 1+{\varepsilon}_a)^{2\cdot |s|}-1 \end{array}\ ] ] 2 . for all parity objectives and for all we have where and . the first part follows from lemma [ lemm_mc_multi ] and proposition [ prop_dist ] . the second part follows from part 1 , the fact that the value function for parity objectives is obtained as the limit of multi - discounted objectives ( theorem [ thrm_lit2 ] ) , and the fact that the bound for part 1 is independent of the discount factors ( hence independent of taking the limit ) . * remark on structural assumption in the proof . * the result of the previous theorem depends on the structural equivalence assumption in two crucial ways . they are as follows : ( 1 ) proposition [ prop_dist ] that establishes the relation of and only holds with the assumption of structural equivalence ; and ( 2 ) without the structural equivalence assumption is , and hence without the assumption the bound of the previous theorem is , which is a trivial bound . we will later show ( in example [ examp1 ] ) that the structural equivalence assumption is necessary . in this section we show two results : first we show robustness of strategies and present quantitative bounds on value functions for turn - based stochastic games , and then we show continuity for
concurrent parity games . in this section we present quantitative bounds for robustness of optimal strategies in structurally equivalent turn - based stochastic games . for every , we present a bound , such that if the distance of the structurally equivalent turn - based stochastic games differs by at most , then any pure memoryless optimal strategy in one game is -optimal in the other . the result is first shown for mdps and then extended to turn - based stochastic games ( both proofs are in the appendix ) . [ thrm_tb_stochastic ] let be a turn - based stochastic game such that the minimum positive transition probability is . the following assertions hold : 1 . for all turn - based stochastic games \!]_\equiv}}] such that , for all parity objectives , every pure memoryless optimal strategy in is an -optimal strategy in . in this section we show value continuity for structurally equivalent concurrent parity games , and show with an example on markov chains that the continuity property breaks without the structural equivalence assumption . finally , with an example on markov chains we show that our quantitative bounds are asymptotically optimal for small distance values . we start with a lemma for mdps . [ lemm_mdp_multi ] let and be two structurally equivalent mdps . let be the minimum positive transition probability in . for all non - negative reward functions such that the reward function is bounded by 1 , for all discount vectors , for all we have & \leq & \displaystyle \bigg(1+\frac{{\mathit{dist}_a}(g_1,g_2)}{\eta}\bigg)^{2\cdot |s|}-1 \end{array}\ ] ] the main idea of the proof of the above lemma is to fix a pure memoryless optimal strategy and then use the results for markov chains .
using the same proof idea , along with randomized memoryless optimal strategies for concurrent game structures and the above lemma, we obtain the following lemma ( the result is identical to the previous lemma , but for concurrent game structures instead of mdps ) .[ lemm_conc_multi ] let and be two structurally equivalent concurrent game structures .let be the minimum positive transition probability in .for all non - negative reward functions such that the reward function is bounded by 1 , for all discount vectors , for all we have & \leq & \displaystyle \bigg(1+\frac{{\mathit{dist}_a}(g_1,g_2)}{\eta}\bigg)^{2\cdot |s|}-1 \end{array}\ ] ] we now present the main theorem that depends on lemma [ lemm_conc_multi ] . [ thrm_conc_multi ] let and be two structurally equivalent concurrent game structures .let be the minimum positive transition probability in .for all parity objectives and for all we have & \leq & \displaystyle \bigg(1+\frac{{\mathit{dist}_a}(g_1,g_2)}{\eta}\bigg)^{2\cdot |s|}-1 \end{array}\ ] ] the result follows from theorem [ thrm_lit2 ] , lemma [ lemm_conc_multi ] and the fact that the bound of lemma [ lemm_conc_multi ] are independent of the discount factors and hence independent of taking the limits . 
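the bound appearing in the theorem above is easy to evaluate ; for small imprecision it behaves like 2 |s| dist_a / eta , which is the asymptotic form discussed later . the numbers below are made up for illustration :

```python
# the bound of the theorem above: (1 + dist_a/eta)^(2|S|) - 1,
# together with its first-order (small-imprecision) approximation.
def value_diff_bound(dist_a, eta, n_states):
    return (1 + dist_a / eta) ** (2 * n_states) - 1

bound = value_diff_bound(0.01, 0.5, 3)   # assumed eps = 0.01, eta = 0.5, |S| = 3
approx = 2 * 3 * 0.01 / 0.5              # linearization 2|S| * eps / eta = 0.12
```

by convexity the exact bound always exceeds the linearization , and the two agree as the imprecision goes to 0 .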
in the following theorem we show that for structurally equivalent game structures , for all parity objectives , the value function is continuous in the absolute distance between the game structures . we have already remarked ( after theorem [ thrm_mc_multi ] ) that the structural equivalence assumption is required in our proofs , and we show in example [ examp1 ] that this assumption is necessary . [ thrm_val_con ] for all concurrent game structures , for all parity objectives \!]_\equiv } } , { \mathit{dist}_a}(g_1,g_2 ) \leq { \varepsilon } } \ \\sup_{s\in s } let be the minimum positive transition probability in . by theorem [ thrm_conc_multi ] we have \!]_\equiv } } , { \mathit{dist}_a}(g_1,g_2 ) \leq { \varepsilon } } \\sup_{s\in s } \lim_{{\varepsilon}\to 0 } \bigg(1+\frac{{\varepsilon}}{\eta}\bigg)^{2\cdot |s|}-1\ ] ] the above limit equals 0 , and the desired result follows . [ examp1 ] in this example we show that in theorem [ thrm_val_con ] the structural equivalence assumption is necessary , and thereby show that the result is tight . we show a markov chain and a family of markov chains , for , such that ( but is not structurally equivalent to ) with a parity objective and we have . the markov chains and are defined over the state space , and in both states have self - loops with probability 1 , and in the self - loop at has probability and the transition probability from to is ( see fig [ figure : buchi - lim ] in appendix ) . clearly , . the parity objective requires visiting the state infinitely often ( i.e.
, assign priority 2 to and priority 1 to ) .then we have as the state is never left , whereas in the state is the only closed recurrent set of the markov chain and hence reached with probability 1 from .hence .it follows that .we now show that our quantitative bound for the value function difference is asymptotically optimal for small distances .let us denote the absolute distance as , and the quantitative bound we obtain in theorem [ thrm_conc_multi ] is , and if is small , then we obtain the following approximate bound we now illustrate with an example ( on structurally equivalent markov chains ) where the difference in the value function is , for small . consider the markov chain defined on state space as follows : states and are absorbing ( states with self - loops of probability 1 ) and for a state we have ; and ; i.e. , we have a markov chain defined on a line from to ( with and absorbing states ) and the chain moves towards with probability and towards with probability ( see fig [ figure : asym ] with complete details in appendix ) .our goal is to estimate the probability to reach the state , and let denote the probability to reach from the starting state .we show ( details in appendix ) that if , then and for , such that is close to 0 , we have .observe that the markov chain obtained for and are structurally equivalent .thus the desired result follows .in this work we studied the robustness and continuity property of concurrent and turn - based stochastic parity games with respect to small imprecision in the transition probabilities .we presented ( i ) quantitative bounds on difference of the value functions and proved value continuity for concurrent parity games under the structural equivalence assumption , and ( ii ) showed robustness of all pure memoryless optimal strategies for structurally equivalent turn - based stochastic parity games .we also showed that the structural equivalence assumption is necessary and that our quantitative bounds are 
asymptotically optimal for small imprecision . we believe our results will find applications in robustness analysis of various other classes of stochastic games . m. abadi , l. lamport , and p. wolper . realizable and unrealizable specifications of reactive systems . in _ icalp89 _ , lncs 372 , pages 1 - 17 . springer , 1989 . r. alur , t.a . henzinger , and o. kupferman . alternating - time temporal logic . , 49:672 - 713 , 2002 . k. chatterjee , l. de alfaro , and t.a . henzinger . the complexity of quantitative concurrent parity games . in _ soda06 _ , pages 678 - 687 . acm - siam , 2006 . k. chatterjee , t.a . henzinger , and m. jurdziński . games with secure equilibria . in _ lics04 _ , pages 160 - 169 . ieee , 2004 . k. chatterjee , m. jurdziński , and t.a . henzinger . quantitative stochastic parity games . in _ soda04 _ , pages 121 - 130 . siam , 2004 . a. church . logic , arithmetic , and automata . in _ proceedings of the international congress of mathematicians _ , pages 23 - 35 . institut mittag - leffler , 1962 . a. condon . the complexity of stochastic games . , 96(2):203 - 224 , 1992 . c. courcoubetis and m. yannakakis . the complexity of probabilistic verification . , 42(4):857 - 907 , 1995 . l. de alfaro . . phd thesis , stanford university , 1997 . l. de alfaro and t.a . henzinger . concurrent omega - regular games . in _ lics00 _ , pages 141 - 154 . ieee , 2000 . l. de alfaro , t.a . henzinger , and r. majumdar . discounting the future in systems theory . in _ icalp03 _ , lncs 2719 , pages 1022 - 1037 . springer , 2003 . l. de alfaro and r. majumdar . quantitative solution of omega - regular games . in _ stoc01 _ , pages 675 - 683 . acm press , 2001 . c. derman . . academic press , 1970 . d.l . dill . . the mit press , 1989 . k. etessami and m. yannakakis . recursive concurrent stochastic games . in _ icalp06 ( 2 ) _ , lncs 4052 , springer , pages 324 - 335 , 2006 . j. filar and k. vrieze . . springer - verlag , 1997 . m. i. freidlin and a. d. wentzell . . springer , 1984 . h. gimbert and w.
zielonka . discounting infinite games but how and why ? , 119(1):3 - 9 , 2005 . a. kechris . springer , 1995 . the determinacy of blackwell games . , 63(4):1565 - 1581 , 1998 . a. pnueli and r. rosner . on the synthesis of a reactive module . in _ popl89 _ , pages 179 - 190 . acm press , 1989 . raghavan and j.a . algorithms for stochastic games - a survey . , 35:437 - 472 , 1991 . p.j . ramadge and w.m . supervisory control of a class of discrete - event processes . , 25(1):206 - 230 , 1987 . stochastic games . , 39:1095 - 1100 , 1953 . e. solan . continuity of the value of competitive markov decision processes . , 16:831 - 845 , 2003 . w. thomas . automata on infinite objects . in j. van leeuwen , editor , _ handbook of theoretical computer science _ , volume b , pages 133 - 191 . elsevier , 1990 . . automatic verification of probabilistic concurrent finite - state systems . in _ focs85 _ , pages 327 - 338 . ieee computer society press , 1985 . perfect - information stochastic parity games . in _ fossacs04 _ , pages 499 - 513 . lncs 2987 , springer , 2004 . _ ( of proposition [ prop_dist ] ) . _ consider , , and . then we have the following two inequalities : we consider , and the argument for is symmetric . we consider and if , then , and otherwise we have the following inequality : it follows that in both cases we have . the desired result follows from the above inequalities . we now present the proof of lemma [ lemm : ratio ] which is obtained as a simple extension of a result of solan . _ ( of lemma [ lemm : ratio ] ) . _ fix a discount vector . we construct a markov chain as follows : , where is a copy of states of ( and for a state we denote its corresponding copy as ) ; and the transition function is defined below 1 . for all ( i.e. , all copy states are absorbing ) ; 2 . for have i.e.
, the expected mean - discounted time in when the original markov chain starts in is the probability in the markov chain that the first hitting state out of is the copy of the state .the claim is easy to verify as both and are the solutions of the following system of linear equations the fact that is the solution of the above equation follows from the results of discounted reward markov chains ( detailed proofs with uniform discount factor for mdps is available in ( e.g. , equation 2.15 of ) , and specialization to markov chains and generalization to discount factor attached to every state is straightforward ) .the fact that is the solution of the above equation follows from the results of characterization of hitting time for transient markov chains ( see for details ) .also the above system of linear equations has a unique solution .the uniqueness of the solution follows from the fact that this is a contraction mapping , and the proof is as follows : let and be two solutions of the system .we chose such that , i.e. , is a state that maximizes the difference of the two solutions .let .as and are solutions of the above system we have by the triangle inequality & \leq & \displaystyle \eta \cdot \sum_{t\in s } { \lambda}(t ) \cdot { \delta}(s_0)(t ) \leq \eta \cdot \max_{t\in s } { \lambda}(t ) \cdot \sum_{t \in s}{\delta}(s_0)(t ) . \end{array}\ ] ] since , it follows that .since it follows that we must have and hence the two solutions must coincide .we now claim that for all .this follows since for all we have and since we have .now we observe that we can apply theorem [ thrm - fw ] on the markov chain with as the set of states of theorem [ thrm - fw ] , and obtain the result . 
indeed the terms and are independent of , and the two products of equation ( [ eq1 ] ) each contain at most terms of the form for . thus the desired result follows . we now illustrate the construction of lemma [ lemm : ratio ] with the aid of some examples . consider the markov chain with states and such that is absorbing and the transition from to has probability 1 , and let the discount factor be for all states . the markov chain along with is shown in fig . [ fig : illus_1 ] . if we start at , the mean - discounted time at is given by in the markov chain , the probability to reach from is , and once is reached the exit state is with probability 1 . hence the probability to exit through state is also . [ fig : illus_1 and fig : illus_2 : state - transition diagrams of the two example markov chains and their copy - state constructions ] we now consider another example to illustrate further . consider the markov chains and in fig [ fig : illus_2 ] , where in it alternates between states and , and the discount factor is . if we start at state , the mean - discounted time at is given by the probability to exit through in in 2-steps is , in 4-steps is , and so on . hence the probability to exit through in is the above examples show how the mean - discounted time in and the exit probability in have the same value . _ ( of lemma [ lemm : poly ] ) . _ we first write as follows : where , for all we have , , and for each . by the hypothesis of the lemma , for all we have since every , multiplying the above inequalities by and summing over yields the desired result . _ ( of lemma [ lemm_mc_multi ] ) . _ we first observe that for a markov chain we have , i.e.
, the value function for a state is obtained as the sum of the product of mean - discounted time of states and the rewards with as the starting state .hence by lemma [ lemm : poly ] it follows that can be expressed as a ratio of two polynomials of degree at most over variables .hence we have let . by definition for all , if , then we have both and are between and .it follows from lemma [ lemm : poly ] , with that thus we have hence we have we consider the case when , and the other case argument is symmetric .we also assume without loss of generality that .otherwise if , since rewards are non - negative , it follows that no state with positive reward is reachable from both in and ( because if they are reachable , then they are reachable with positive probability and then the value is positive ) , and hence and the result of the lemma follows trivially .since we assume that and , we have & \leq & { { \mathsf{val}}}(g_2,{\mathsf{mdt}}({\vec{\lambda}},r))(s ) \cdot \big((1 + { \varepsilon})^{2\cdot|s|}-1\big ) \end{array}\ ] ] since the reward function is bounded by 1 , it follows that , and hence we have the desired result follows .1 . for all player-1 mdps \!]_\equiv}}] such that , for all parity objectives , every pure memoryless optimal strategy in is an -optimal strategy in .in other words , for the interval , every pure memoryless optimal strategy in is an -optimal strategy in all structurally equivalent mdps of such that the distance lies in the interval . 1 . 
without loss of generality , let . let be a pure memoryless optimal strategy in ; such a strategy exists by theorem [ thrm_lit1 ] . then we have the following inequality & \geq & { { \mathsf{val}}}(g_1 { \upharpoonright}\stra_1,\phi)(s ) - \big((1 + { \mathit{dist}_r}(g_1,g_2))^{2\cdot|s|}-1\big ) \\[1ex ] & = & { { \mathsf{val}}}(g_1,\phi)(s ) - \big((1 + { \mathit{dist}_r}(g_1,g_2))^{2\cdot|s|}-1\big ) \end{array}\ ] ] the ( in)equalities are obtained as follows : the first inequality follows because the value in is at least the value in obtained by fixing a particular strategy ( in this case ) ; the second inequality is obtained by applying theorem [ thrm_mc_multi ] on the structurally equivalent markov chains and ; and the final equality follows since is an optimal strategy in . the desired result follows . 2 . let \!]_\equiv}} v_{i+1}= ( \frac{1}{2}+{\varepsilon})\cdot c_i \cdot v_i { \varepsilon}^2 { \varepsilon}^2 $ ) } \end{array}\ ] ] thus we obtain that . then we have and then and so on . finally we obtain as follows : . observe that for the markov chain with , the states and are the recurrent states , and since the chain is symmetric from ( with ) the probabilities to reach and must be equal and hence is . it follows that we must have . hence we have that for , but very small , . thus the difference with the value function when as compared to when but very small is . also observe that the markov chains obtained for and are structurally equivalent . thus the desired result follows .
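the line - chain lower - bound example computed above can be evaluated in closed form with the classical gambler's - ruin formula . the parameterization below ( states 0..n with 0 and n absorbing , up - probability 1/2 + eps from interior states ) reconstructs the elided details and is therefore an assumption :

```python
def reach_top(n, k, eps):
    # probability that the walk started at k hits n before 0,
    # with up-probability p = 1/2 + eps (classical gambler's ruin).
    p = 0.5 + eps
    if eps == 0.0:
        return k / n
    r = (1 - p) / p
    return (1 - r ** k) / (1 - r ** n)

n, k = 10, 5
v_sym = reach_top(n, k, 0.0)       # unbiased chain: exactly k/n = 1/2
v_eps = reach_top(n, k, 1e-4)      # tiny imprecision in the transition probs
```

with n = 10 and a bias of 1e-4 the reach probability already shifts by roughly ( n/2 ) times the bias , i.e. , the imprecision is amplified by a factor on the order of |s| , as claimed .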
we study concurrent games with -regular winning conditions specified as _ parity _ objectives . the value for player 1 for a parity objective is the maximal probability with which the player can guarantee the satisfaction of the objective against all strategies of the opponent . we study the problem of continuity and robustness of the value function in concurrent and turn - based stochastic parity games with respect to imprecision in the transition probabilities . we present quantitative bounds on the difference of the value function ( in terms of the imprecision of the transition probabilities ) and show the value continuity for structurally equivalent concurrent games ( two games are structurally equivalent if the supports of the transition functions are the same and the probabilities differ ) . we also show robustness of optimal strategies for structurally equivalent turn - based stochastic parity games . finally , we show that the value continuity property breaks without the structural equivalence assumption ( even for markov chains ) and show that our quantitative bound is asymptotically optimal . hence our results are tight ( the assumption is both necessary and sufficient ) and optimal ( our quantitative bound is asymptotically optimal ) . |
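The continuity bounds above say that the value of a Markov chain (or game) changes only slightly when transition probabilities are perturbed while the supports stay the same (structural equivalence). A minimal numerical sketch of this in the discounted setting; the chain, rewards and discount factor below are hypothetical examples, not taken from the paper:

```python
import numpy as np

def mc_value(P, reward, gamma=0.95):
    # Discounted value v = (I - gamma*P)^{-1} r for a Markov chain
    # with transition matrix P and per-state reward vector r.
    n = len(reward)
    return np.linalg.solve(np.eye(n) - gamma * np.asarray(P, float),
                           np.asarray(reward, float))

# Two structurally equivalent chains: identical supports (zero patterns),
# probabilities differ by a small amount; state 2 is absorbing and rewarded.
P1 = np.array([[0.50, 0.50, 0.0],
               [0.30, 0.00, 0.7],
               [0.00, 0.00, 1.0]])
P2 = np.array([[0.55, 0.45, 0.00],
               [0.25, 0.00, 0.75],
               [0.00, 0.00, 1.00]])
r = np.array([0.0, 0.0, 1.0])

v1, v2 = mc_value(P1, r), mc_value(P2, r)
print(np.max(np.abs(v1 - v2)))  # ~0.18, small relative to values of order 20
```

The bound in the text is quantitative (in terms of the distance between the games and the number of states); this sketch only illustrates the qualitative continuity it implies.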
quantum correlations in composite systems transcend entanglement .a bipartite quantum state can be defined as nonclassical or nonclassically correlated if it can not be expressed as a convex mixture of local basis states of subsystems and .consequently , all inseparable ( entangled ) states as well as the majority of separable states are nonclassical .general nonclassical correlations , however , can be mapped to entanglement in a very precise sense , which provides an insightful framework for their characterization and operational interpretation .specifically , it was proven in and very recently experimentally observed in that all nonclassical states of a finite - dimensional system can be turned into states with distillable entanglement between the system and a set of ancillae by an _activation protocol_. focusing on a bipartite setting , the protocol runs as follows .the subsystems and are first subject to arbitrary local unitary transformations ; then , each system interacts via a controlled - not ( cnot ) operation ( i.e. a so - called premeasurement interaction ) with an auxiliary system , , initialized in a pure state .the activation protocol then possesses two key properties : i ) for all classical states at the input of the protocol , there exist local unitaries for which the output state is separable across the splitting , and ii ) for all nonclassical states and for all local unitaries , the output state is entangled across the splitting . 
let us stress that both criteria i ) and ii ) must be met by any scheme in order to be a valid activation protocol .in particular , they allow us to define faithful measures of nonclassical correlations for the input state in terms of the output entanglement , minimized over .one such measure , when the output entanglement is quantified by the negativity , has been termed negativity of quantumness , and has been experimentally investigated in .in this paper we study activation of nonclassical correlations in multimode bipartite gaussian states of continuous variable systems .nonclassical correlations of gaussian states have been studied extensively both theoretically and experimentally but their interplay with entanglement has not been pinned down so far in terms of the activation framework .attempts to devise activation - like protocols for gaussian states have been explored . however , these differed significantly from the original prescription in that nonunitary operations were employed between system and ancillae , so that the entanglement generation was obtained as a dynamical feature , and conditions i ) and ii ) were not generally verified . here we consider a general gaussian activation protocol in which are gaussian unitaries and the cnot gates are replaced with a global gaussian unitary on subsystems .
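The entanglement quantifier underlying the negativity of quantumness is the (Vidal-Werner) negativity, N(rho) = (||rho^{T_B}||_1 - 1)/2, computed from the partial transpose. A minimal finite-dimensional sketch of this measure (two qubits rather than Gaussian modes, for illustration only; it shows the plain negativity, not the minimization over local unitaries that defines the negativity of quantumness):

```python
import numpy as np

def negativity(rho, dA=2, dB=2):
    # N = (||rho^{T_B}||_1 - 1)/2: partially transpose subsystem B,
    # then sum the absolute values of the (real) eigenvalues.
    pt = rho.reshape(dA, dB, dA, dB).swapaxes(1, 3).reshape(dA * dB, dA * dB)
    return (np.sum(np.abs(np.linalg.eigvalsh(pt))) - 1) / 2

# Maximally entangled Bell state: negativity 1/2.
bell = np.zeros(4); bell[0] = bell[3] = 1 / np.sqrt(2)
rho_bell = np.outer(bell, bell)
print(negativity(rho_bell))        # 0.5

# Classically correlated separable state: negativity 0.
rho_sep = np.diag([0.5, 0.0, 0.0, 0.5])
print(negativity(rho_sep))         # 0.0
```

For the Gaussian states considered in the paper the same quantity is evaluated on (truncated) Fock-basis matrix elements, as described later in the text.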
in section [ sec_1 ] we then prove that any such protocol satisfying condition i ) will unavoidably violate condition ii ) , which implies that activation of gaussian nonclassical correlations by gaussian operations is impossible .this fact establishes a new no - go theorem for gaussian quantum information processing , which can be listed alongside other well known no - go results such as the no - distillation theorem , according to which distilling entanglement from gaussian states by using only gaussian operations is impossible .we then show in section [ sec_2 ] how , by using non - gaussian operations which properly extend the cnot to infinite dimensions , one can construct the continuous variable counterpart of the activation protocol of , verifying criteria i ) and ii ) .this allows us to define the negativity of quantumness for gaussian states and to calculate it for relevant examples in section [ sec_3 ] .this work provides an operational setting to understand and manipulate nonclassical correlations in paradigmatic infinite - dimensional systems .we draw our conclusions in section [ sec_4 ] , while some technical derivations ( which can be of independent interest ) are deferred to the appendices .gaussian states are quantum states of systems with an infinite - dimensional hilbert space ( continuous variable systems ) , e.g. a collection of harmonic oscillators , which possess a gaussian - shaped wigner function in phase space .
modes are described by a vector of quadrature operators satisfying the canonical commutation rules expressible in terms of elements of the vector as =i\omega_{jk} ] , where is the squeezing parameter .hence , by a direct substitution into eq .( [ n2 ] ) we get consequently , as the output negativity is equal to the negativity of the input state , it coincides with the true optimized negativity of quantumness , and our choice of local unitaries is thus optimal for pure states .the negativity ( [ np ] ) is depicted by a solid red line in fig .[ fig2 ] .( [ np ] ) ] ( solid red line ) and its lower bound [ eq . ( [ lp ] ) ] ( dash - dotted brown line ) for pure squeezed vacuum states , plotted as a function of the local mean number of thermal photons .upper bound on the negativity of quantumness [ eq .( [ nm ] ) ] ( dashed blue line ) and its lower bound [ eq . ( [ lm ] ) ] ( dotted black line ) for separable mixed states obtained as unbiased mixtures of coherent states , plotted as a function of the local mean number of thermal photons .the inset shows a close - up for , where the lower bounds become tight.,width=321 ] these gaussian states are of the form and can be prepared by splitting a thermal state with mean number of thermal photons on a balanced beam splitter . here , and . the states are already in standard form with a cm ( [ tildegamma ] ) specified by and . making use of the components of a coherent state in fock basis we get the following matrix elements of the state ( [ mixture ] ) , where . 
by substitution of the latter expression into eq .( [ n2 ] ) we get after some algebra ^ 2 - 1\right\}.\ ] ] the negativity ( [ nm ] ) is depicted by a dashed blue line in fig .[ fig2 ] , and is generally smaller than the one of pure states calculated in ( [ np ] ) .both classes of gaussian states have a nonzero negativity of quantumness which increases with ; this is in agreement with earlier studies of nonclassical correlations based on entropic measures of quantum discord .in general we need the fock basis elements for an arbitrary two - mode gaussian state with zero first moments . combining the results of refs . we can express them as where is the cm of the state , is the identity matrix , and is the four - dimensional hermite polynomial at the origin ; see appendix [ secapp_2 ] for a complete derivation of eq .( [ rhohermite ] ) . here^{\dagger}v\ ] ] is the symmetric matrix defining the polynomial , where and for the standard - form cm , eq .( [ tildegamma ] ) , we get in particular with being the identity matrix , and ( ) .one can then evaluate the negativity ( [ n2 ] ) by performing a numerical summation of the absolute values of the elements ( [ rhohermite ] ) .the higher - order hermite polynomials can be calculated from the lower - order ones by using e.g. the recurrence formula derived in appendix [ secapp_2 ] .we remark that the compact expression in equation ( [ rhohermite ] ) is of independent interest and can be useful for the characterization of hybrid information processing involving conversion between continuous and discrete variable entanglement , or particularly for studies of bell nonlocality of arbitrary two - mode gaussian states by means of dichotomic pseudospin measurements , whose expectation value can be conveniently evaluated at the fock space level . 
in the context of the present paper , apart from the utility for numerical evaluation of the output negativity ( [ n2 ] ) , equation ( [ rhohermite ] ) also enables us to derive a simple analytical lower bound on the output negativity .the bound results from the following chain of inequalities where the first inequality follows from the inequality which holds for any , the second inequality is a consequence of the triangular inequality for absolute values , and the last equation follows from the expression for the generating function of the four - dimensional hermite polynomials at the origin , where and is the matrix ( [ rr ] ) . a comparison between the right - hand side ( rhs ) of the previous equation and the expression of the husimi -quasiprobability distribution in the fock basis further yields as can be easily seen from the results of appendix [ secapp_2 ] .therefore , the last expression in the chain of inequalities ( [ inequalities ] ) can be written in the following compact form now , making use of the inequalities ( [ inequalities ] ) and equality ( [ phia11 ] ) one finds that the sum in ( [ n2 ] ) is lower - bounded as which finally gives the following bound on the output negativity ( [ n2 ] ) .\ ] ] the bound ( [ l ] ) can be evaluated for any zero - mean two - mode gaussian state with cm by calculating the matrix ( [ rr ] ) and substituting it into the formula ( [ phia11 ] ) . 
to test the tightness of the bound we calculate it for the previous examples of pure states and mixtures of coherent states , and compare the obtained lower bounds with the exact values of the negativities ( [ np ] ) and ( [ nm ] ) , respectively .the cm is in the standard form ( [ tildegamma ] ) in both cases and therefore one can easily evaluate the matrix ( [ rr ] ) using eqs .( [ rst ] ) and ( [ rj ] ) which gives , after substitution into eq .( [ phia11 ] ) , for pure states , and for unbiased mixtures of coherent states .the corresponding negativities then satisfy \equiv\mathcal{l}_{\rm p}\ ] ] and the bounds and as well as the negativities , eq .( [ np ] ) , and , eq .( [ nm ] ) , are depicted in fig .the figure shows that both bounds are tight in the region of small ( see the inset ) , which also proves that eq .( [ nm ] ) amounts to the exact value of the negativity of quantumness for mixtures of coherent states with small mean number of thermal photons in each mode .both lower bounds are then shown to increase with increasing and the gap between the bounds and the numerically evaluated values of the output negativities gets larger .further analysis reveals however that the lower bounds and are nonmonotonic for larger ; they both attain a maximum at and , respectively , and then both monotonically decrease for larger values of ; eventually , both lower bounds become trivial as they enter the region of negative values , namely for and for . as a final remark , note that the sum in negativity ( [ n2 ] ) just amounts to the so - called -norm of the density matrix , i.e.
, .the results of the present section thus also describe how to calculate numerically the -norm for an arbitrary two - mode gaussian state with zero means and the inequality ( [ sumbound ] ) gives a simple analytical lower bound on such a norm .we have shown that a protocol capable of activating nonclassical correlations in bipartite gaussian states based solely on gaussian operations can not exist .we have also constructed a non - gaussian activation protocol and we have investigated quantitatively its performance using the negativity of quantumness as a figure of merit .our analysis suggests that optimal performance of the protocol is achieved if the input gaussian state is in the standard form .restricting to the local gaussian unitaries the conjecture can be proved or disproved with the help of eq .( [ rhohermite ] ) by numerical minimization of the negativity ( [ n2 ] ) with respect to the unitaries , which is left for further research .we believe that our results will stimulate further exploration of the negativity of quantumness and its interplay with other nonclassicality indicators in the context of gaussian states .l. m. acknowledges the project no .p205/12/0694 of gar and the european social fund and msmt under project no .d. m. acknowledges the support of the operational program education for competitiveness project no .cz.1.07/2.3.00/20.0060 co - financed by the european social fund and czech ministry of education .g. a. acknowledges the brazilian agency capes [ pesquisador visitante especial - grant no .108/2012 ] and the foundational questions institute [ grant no .fqxi - rfp3 - 1317 ]. g. a. would also like to thank m. barbieri and m. 
piani for discussions .this section is dedicated to the proof that a bipartite gaussian state of an -mode subsystem and an -mode subsystem is classically correlated across the splitting if and only if it is a product state .the proof of the `` only if '' part is trivial because any product state is diagonal in the product of eigenbases of local states .the `` if '' part can be proved using the necessary and sufficient condition for zero quantum discord . quantum discord of a quantum state with a measurement on subsystem is zero if and only if the state can be expressed as where is an orthonormal basis of subsystem .the zero - discord criterion then says that a quantum state can be expressed in the form ( [ qcstate ] ) if and only if for an informationally complete positive operator valued measurement ( ic - povm ) on subsystem , the conditional states of subsystem corresponding to the measurement outcomes , mutually commute , i.e. , =0,\quad \mbox{for all and }.\ ] ] we consider a gaussian state with zero means and covariance matrix ( cm ) . modes comprising the subsystem are subject to a gaussian measurement characterized by a cm and a vector of measurement outcomes .if a measurement outcome occurs then the state collapses into the -mode state of subsystem with cm and vector of first moments of the form where and are blocks of the cm expressed with respect to the splitting , as in ref . we will now express criterion ( [ criterion ] ) in terms of the characteristic function .for this purpose we will first use the fact that an -mode quantum state can be expressed as where is the characteristic function of the state and is the displacement operator with and is the vector of quadratures .
due to the validity of the relation =(2\pi)^{m}\delta(\xi-\xi') ] given by (\xi)\right\}.\ ] ] by inserting the rhs of the commutator from eq .( [ commutator ] ) into eq .( [ ckkprimed ] ) , using eq .( [ wdagw ] ) and carrying out the integration , we arrive at the characteristic function ( [ ckkprimed ] ) in the form .\ ] ] from eq .( [ ckkprimed ] ) and the formula =\frac{1}{(2\pi)^{m}}\int_{\mathbb{r}_{2m}}c_{kk'}(\xi)w^{\dag}(\xi)d\xi\ ] ] it follows that =0 k k'$}.\ ] ] like in the previous case we can express the latter condition in terms of a characteristic function .we can proceed exactly along the same lines as in the case of the commutator ( [ commutator ] ) with the only difference that now we consider measurement on the -mode subsystem .consequently , the formulas which we get for the present case of the commutator ( [ commutator2 ] ) are obtained from the formulas derived in the context of commutator ( [ commutator ] ) by the replacements , of the blocks of the matrix and by the replacement .thus we find that the commutator ( [ commutator2 ] ) vanishes if and only if .therefore , the condition is necessary and sufficient for an -mode gaussian state to be classical , which concludes our proof .our aim is to express the elements of a density matrix of a gaussian state of two modes and in the fock basis .here and in what follows we assume that the state has all first moments equal to zero .the present derivation combines the results obtained in refs .firstly we express the elements of the density matrix in the basis of coherent states as where we have used the expression of the components of a coherent state in the fock basis the matrix element on the lhs of eq .( [ cohelements ] ) can be further expressed as where ^{-1}\alpha}\ ] ] is the husimi -quasiprobability distribution of the gaussian state . 
here, and is the complex cm corresponding to antinormal ordering of the canonical operators .substituting now from eq .( [ lhs ] ) into the lhs of eq .( [ cohelements ] ) and making use of eq .( [ phia ] ) we arrive at the following equality ^{-1}-\openone\right\}\alpha}\\ & & \quad = \sum_{m_1,m_2,n_1,n_2=0}^{\infty}\frac{{\alpha_1^*}^{m_1}{\alpha_2^*}^{m_2}\alpha_1^{n_1}\alpha_2^{n_2}}{\sqrt{m_1!m_2!n_1!n_2!}}{\langlem_1m_2|}\rho{|n_1n_2\rangle}. \nonumber\end{aligned}\ ] ] the lhs of the latter equation can be expressed in terms of the multi - dimensional hermite polynomials . specifically , the generating function of the four - dimensional hermite polynomials is where , , and is a symmetric matrix of order four .the lhs of eq .( [ focksum ] ) then can be rewritten in terms of the lhs of eq .( [ generatingfunction ] ) as follows .the complex cm can be expressed as where is the identity matrix , is a unitary matrix , and is the standard real symmetrically ordered cm of the state , with elements , , where is the -th component of the vector of quadratures .hence we get ^{-1}-\openone = o\left[\left(\gamma+\frac{1}{2}\openone\right)^{-1}-\openone\right]o^{\dagger}.\ ] ] furthermore , we can write where consequently , ^{-1}-\openone\right\}\alpha = h^t rh,\ ] ] where ^{\dagger}v.\ ] ] as and the cm is symmetric , one finds immediately that and therefore is symmetric as required . making use of eqs .( [ focksum ] ) and ( [ generatingfunction ] ) we get where the matrix defining the hermite polynomial is given in eq . ( [ r ] ) . by equating each term in the summationwe are left with the elements of the density matrix in the fock basis , where equation ( [ rhofock ] ) allows us to calculate any element of a density matrix in the fock basis for an arbitrary two - mode gaussian state with zero first moments . 
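The Hermite polynomials at the origin entering the Fock-basis elements of eq. ( [ rhofock ] ) can be generated recursively, as noted in the text. A minimal sketch; the generating-function convention G(h) = exp(h^T R h / 2) and the resulting recurrence H_{n+e_k}(0) = sum_j R_{kj} n_j H_{n-e_j}(0) are assumptions chosen for illustration, and the paper's convention in eq. ( [ generatingfunction ] ) may differ in signs and normalization:

```python
from functools import lru_cache

def hermite_at_origin(R, m):
    # Multidimensional Hermite polynomial H_m(0) for the generating
    # function G(h) = exp(h^T R h / 2), via the recurrence
    #   H_{n+e_k}(0) = sum_j R[k][j] * n_j * H_{n-e_j}(0),  H_0(0) = 1.
    # (Convention assumed for illustration; adapt signs to the paper.)
    dim = len(R)

    @lru_cache(maxsize=None)
    def H(idx):
        if any(v < 0 for v in idx):
            return 0.0
        if all(v == 0 for v in idx):
            return 1.0
        k = next(i for i, v in enumerate(idx) if v > 0)
        n = list(idx); n[k] -= 1   # idx = n + e_k
        return sum(R[k][j] * n[j] *
                   H(tuple(v - (i == j) for i, v in enumerate(n)))
                   for j in range(dim) if n[j] > 0)

    return H(tuple(m))

# 2x2 toy matrix R (hypothetical values, not from the paper):
R = [[0.3, 0.2], [0.2, 0.5]]
print(hermite_at_origin(R, (1, 1)))   # R[0][1] = 0.2
print(hermite_at_origin(R, (2, 0)))   # R[0][0] = 0.3
print(hermite_at_origin(R, (1, 2)))   # odd total parity -> 0
```

The even-parity property stated in the appendix (polynomials with odd multi-index parity vanish) comes out automatically, since G(h) = G(-h) for a quadratic exponent.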
to calculate matrix ( [ r ] )it is convenient to express the cm in the block form this allows us to express the inverse matrix , appearing in eq .( [ r ] ) , in block form using the following blockwise inversion formula , higher - order hermite polynomials can be calculated from lower - order polynomials using a recurrence relation .it is derived from the generating function ( [ generatingfunction ] ) , where we set . by deriving both sides of the equation ( [ generatingfunction ] ) with respect to the -th element of the vector , substituting the rhs of eq .( [ generatingfunction ] ) for the exponential function appearing on the lhs of the obtained expression and equating each term in the summation , we arrive at the following recurrence relation where is the four - dimensional hermite polynomial at the origin with multi - index .the coefficients correspond to the -th element of the matrix , eq .( [ r ] ) , and is the -th canonical basis vector with 1 in the -th component and zeros everywhere else . here ,any hermite polynomial with a negative index is zero , i.e. for all with for some .every hermite polynomial at the origin can be found from the latter recurrence formula and by using the first few cases , with .these can be derived by a direct calculation from the expression found in .note that it is sufficient to calculate only the polynomials where the parity of the multi - index is even .when the parity of the multi - index is odd , i.e. , where , then .g. adesso , v. dambrosio , e. nagali , m. piani , and f. sciarrino , phys .* 112 * , 140501 ( 2014 ) .g. vidal and r. f. werner , phys .a * 65 * , 032314 ( 2002 ) .t. nakano , m. piani , and g. adesso , phys .a * 88 * 012117 ( 2013 ) .i. a. silva _et al . _ , phys . rev .110 * , 140501 ( 2013 ) ; f. m. paula _ et al .lett . * 111 * , 250401 ( 2013 ) .g. adesso and f. illuminati , j. phys .a : math . theor . * 40 * 7821 , ( 2007 ) ; c. weedbrook _ et al . _ rev .phys . * 84 * , 621 ( 2012 ) ; g. adesso , s. 
ragy , and a. r. lee , open syst . inf . dyn . * 21 * , 1440001 ( 2014 ) .l. s. madsen , a. berni , m. lassen , and u. l. andersen , phys .lett . * 109 * , 030402 ( 2012 ) ; r. blandino _ et al .lett . * 109 * , 180402 ( 2012 ) ; m. gu , _ et al ._ , _ nature phys ._ * 8 * , 671 ( 2012 ) .s. rahimi - keshari , c. m. caves , and t. c. ralph , phys .a * 87 * , 012119 ( 2013 ) .r. f. werner and m. m. wolf , phys .lett . * 86 * , 3658 ( 2001 ) .a. peres , phys .* 77 * , 1413 ( 1996 ) .r. simon , phys .lett . * 84 * , 2726 ( 2000 ) . v. v. dodonov , v. i. manko , and v. v. semjonov , nuovo cimento b * 83 * , 145 ( 1984 ) .v. v. dodonov , o. v. manko , and v. i. manko , phys .a * 50 * , 813 ( 1994 ) .j. fiurek and j. peina , quantum statistics of light propagating in nonlinear optical couplers . in j. peina ,editor , _ coherence and statistics of photons and atoms _ , chapter 2 , pages 65 - 110 .j. wiley , new york , 2001 . , edited by a. erdlyi ( mcgraw - hill , new york , 1953 ) .a. datta , eprint - arxiv:0807.4490 .g. giedke and j. i. cirac , phys . rev .a * 66 * , 032316 ( 2002 ) .g. m. dariano , p. perinotti , and m. f. sacchi , j. opt .b * 6 * , s487 ( 2004 ) .b. bylicka and d. chruscciski , phys .a * 81 * , 062102 ( 2010 ) .j. peina , _ quantum statistics of linear and nonlinear optical phenomena _( kluwer , dordrecht , 1991 ) .r. a. horn and c. r. johnson , _ matrix analysis _ ( cambridge university press , cambridge , england , 1985 ) . | we study general quantum correlations of continuous variable gaussian states and their interplay with entanglement . specifically , we investigate the existence of a quantum protocol activating all nonclassical correlations between the subsystems of an input bipartite continuous variable system , into output entanglement between the system and a set of ancillae . 
for input gaussian states , we prove that such an activation protocol can not be accomplished with gaussian operations , as the latter are unable to create any output entanglement from an initial separable yet nonclassical state in a worst - case scenario . we then construct a faithful non - gaussian activation protocol , encompassing infinite - dimensional generalizations of controlled - not gates to generate entanglement between system and ancillae , in direct analogy with the finite - dimensional case . we finally calculate the negativity of quantumness , an operational measure of nonclassical correlations defined in terms of the performance of the activation protocol , for relevant classes of two - mode gaussian states . |
the topological and dynamical aspects of complex networks have been the focus of intensive research during the last years .an open and unsolved problem in network and computer science is the following question : how to cover a network with the smallest possible number of boxes of a given size ? in a complex network , a box size can be defined in terms of the chemical distance , , which corresponds to the number of edges on the shortest path between two nodes .this means that every node is less than edges away from another node in the same box .here we use the burning approach for the box covering problem , thus the boxes are defined for a central node or edge . instead of calculating the distance between every pair of nodes in a box , the maximal distance to the central node or edge can then be related to the size of the box for a central node and for a central edge .the maximal chemical distance within a box of a given size is for a central node and for a central edge .although this problem can be simply stated , its solution is known to be np - hard .it can also be mapped to a graph coloring problem in computer science and has important applications , e.g. , the calculation of fractal dimensions of complex networks or the identification of the most influential spreaders in networks . here we introduce an efficient algorithm for fractal networks which is capable of determining the minimum number of boxes for a given parameter or .moreover , we compare it for two benchmark networks with a standard algorithm used to approximately obtain the minimal number of boxes . in principle , the optimal solution should be identified by testing exhaustively all possible solutions . nevertheless , for practical purposes , this approach is unfeasible , since the solution space with its solutions is too large .present algorithms like maximum - excluded - mass - burning and merging algorithms are based on the sequential addition of the box with the highest score , e.g.
, the score is proportional to the number of covered nodes , and the boxes with the highest score are sequentially included .other algorithms are based on simulated annealing , but without the guarantee of finding the optimal solution .even greedy algorithms end up with a similar number of boxes as the algorithms mentioned before .the greedy algorithm sequentially adds a node to an existing box if all other nodes in this box are within the chemical distance ; if there is no such box , a new box with the new node is created .it is therefore believed that the results are close to the optimal result , although the real optimal solution is unknown .+ this paper is organized as follows . in section ii , we introduce the algorithm and then explain the main difference between the present state of the art algorithm and our optimal algorithm for a given distance . in section iii , results for two benchmark networks are presented and the improvement in performance of our algorithm is quantitatively shown . finally , in section iv , we present conclusions and perspectives for future work .we use two slightly different algorithms for the calculation of the optimal box covering solution , one for odd values of and another for even values . to get the results for an odd value ,the following rules are applied : + 1 . create all possible boxes : for every node create a box containing all nodes that are at most edges away .node is called center of the box . an example is shown in fig . [ fig:1]a .2 . remove unnecessary boxes : search and remove all boxes which are fully contained in another box ( see fig . [fig:1]b ) .3 . remove unnecessary nodes : for every node , check all the boxes containing : .if another node is contained in all of these boxes , remove it from _ all _ boxes ( see fig .[ fig:1]c ) .4 . remove pairs of unnecessary twin boxes : find two nodes which are both in exactly two boxes of size two : , and , . if and , then and can be removed .
if and , then and can be removed .an example for this rule is shown in fig .[ fig:5 ] .note that such twin boxes also appear for due to the removal of unnecessary nodes .search for boxes that must be contained in the solution : add to the solution all boxes which have a node present only in this box .remove all nodes covered by from other boxes .iterate a : repeat 2 - 5 until there is no node which is covered by a single box and is not part of the solution .system split : identify if the remaining network can be divided into subnetworks , such that all boxes in a subnetwork contain only nodes of this subnetwork .then these subnetworks can be processed independently from each other .system split : find the node which is in the smallest number of boxes , each of these boxes covers another set of nodes .if there is more than one node fulfilling this criterion , choose the node which is covered by the largest boxes .then the algorithm is divided into sub - algorithms , which can be independently calculated in parallel . by removing from each of the sub - algorithms another set of nodes ,all possible solutions are considered .an example for the splitting is shown in fig .[ fig:6 ] .since we want to identify only one optimal solution , we do not need to calculate the results of all sub - algorithms .as soon as one of the sub - algorithms identifies an optimal solution , we can skip the calculation of the others . furthermore , the calculation of a sub - algorithm can be skipped , if the minimal number of required additional boxes reaches the number of boxes of the best solution found so far by a parallel sub - algorithm .iterate b : repeat 2 - 8 until no nodes are uncovered .identify the best solution : choose the solution with the lowest number of boxes .this solution is optimal for a given .+ to get the results for an even value of the first step is slightly different : 1 .
create all possible boxes : for every _ edge _ create a box containing all nodes that are at most _ nodes _ away ._ edge _ is called center of the box .all other steps are the same as for the odd case .note that the calculation for odd values scales with the number of nodes of the network and with the number of edges for even values .instead of sequentially including boxes , the idea of our algorithm is to remove all non - optimal boxes from the solution space ending up with a final , optimal solution . to reduce the huge solution space ,our box covering algorithm uses two basic ingredients : 1 ) unnecessary boxes from the solution space are discarded and the boxes which definitively belong to the solution are kept .2 ) unnecessary nodes from the network are discarded .these two steps reduce the solution space of a wide range of network types significantly , especially if they are applied in alternation as the removal of a box can lead to the removal of nodes and other boxes and vice - versa .nevertheless these two steps do not necessarily lead to the optimal solution , thus the solution space has to be split into several possible sub - solution spaces . in each of these sub - solutions the first two steps are repeated .note that the splitting does not reduce the number of possible solutions , thus only the first two steps reduce the solution space and in the worst case , the algorithm must calculate the entire solution space .in any case , for many complex networks iterating these three steps significantly reduces the solution space to a few solutions from which the optimal box covering can be obtained .+ the remaining question is how to judge whether a box or node is necessary or unnecessary . on the one hand , a box is unnecessary if all nodes of a box are also part of another box .this box can be removed , because the other box covers at least the same nodes and often additional nodes .
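The two basic pruning ingredients (discard dominated boxes, discard redundant nodes) can be sketched as set operations. This is an illustrative sketch, not the authors' implementation; boxes are plain Python sets of node labels:

```python
def remove_unnecessary_boxes(boxes):
    # A box is unnecessary if all its nodes lie in another box
    # (for duplicates, keep only the first occurrence).
    keep = []
    for i, b in enumerate(boxes):
        if not any((b < c) or (b == c and i > j)
                   for j, c in enumerate(boxes) if j != i):
            keep.append(set(b))
    return keep

def remove_unnecessary_nodes(boxes):
    # Node w is redundant if some other node v appears only in boxes
    # that also contain w: covering v then automatically covers w.
    nodes = sorted(set().union(*boxes))
    member = {v: frozenset(i for i, b in enumerate(boxes) if v in b)
              for v in nodes}
    redundant = set()
    for w in nodes:
        for v in nodes:
            if v == w or v in redundant:
                continue
            # ties (identical membership) keep the smaller label
            if member[v] <= member[w] and (member[v] < member[w] or v < w):
                redundant.add(w)
                break
    return [b - redundant for b in boxes]

boxes = [{1, 2, 3}, {2, 3}, {3, 4, 5}]
boxes = remove_unnecessary_boxes(boxes)      # drops the dominated {2, 3}
print(remove_unnecessary_nodes(boxes))       # [{1}, {4}]
```

After both passes each surviving box covers a node found nowhere else, so both boxes become necessary, mirroring rule 5 of the algorithm.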
on the other handa box is necessary if a node is exclusively covered by this single box .this box has to belong to the solution , since only if the box is part of the solution , the node is covered .+ in contrast , nodes can easily be identified as unnecessary .for example all nodes of a box , which is part of the solution , can be removed from all other boxes , since they are already covered .additionally , if a node shares all boxes with another node , the other node can be removed , since the second node is always covered , if the first node is covered .these few rules are in principle sufficient to get the optimal solution , since our algorithm starts with _ all _ or ( for central edges ) possible solutions and discards unnecessary and includes necessary boxes .+ although we only calculate results for undirected , unweighted networks , the algorithm can easily be extended to directed and weighted networks . in both casesonly the initial step , the creation of boxes , is different . for directed networks, the box around a central node contains all nodes which are reachable with respect to the direction , while for weighted networks , the distance is the sum of the edge weights between the nodes .+ next we show that our algorithm can also identify optimal solutions for large networks .therefore , we have applied it to two different benchmark networks , namely the _ e. coli _ network , with 2859 proteins and 6890 interactions between them , and the www network .we compare the results for the minimal box number of our algorithm for different values of box sizes with the results of the greedy graph coloring algorithm * ? ? 
, as displayed in fig . [ fig : cellular ] . while the absolute improvement is rather small , the relative improvement is up to larger for . if the network is fractal , it should obey the relation n_b \sim l_b^{-d_b} , where d_b is the fractal dimension . interestingly , the fractal dimension of the network is nearly unaffected by the choice of the algorithm , i.e. , it is similar for the greedy algorithm and for our optimal algorithm . note that for , due to the fact that the boxes are calculated based on the definition of a central node or edge , we have one more box . the simplest case where such a difference occurs is a ring of four connected nodes ( 1 - 2 , 2 - 3 , 3 - 4 , 4 - 1 ) . all nodes have a chemical distance of at most two to each other ( ) ; however , it is not possible to draw a box of radius one around any node ( ) which contains all nodes . + the second example is the www network , containing 325729 nodes and 1090108 edges . as in the previous case , our algorithm outperforms the state - of - the - art algorithm , but yields similar fractal behavior , as shown in fig . [ fig : www ] . for intermediate box sizes , we have a large improvement , since up to and up to fewer boxes are needed . for we have two boxes more , as in the _ e. coli _ network case , due to the two definitions of the box - covering problem , while for larger both algorithms give similar results . interestingly , it seems that the improvement for even distances ( for central edges ) is significantly larger than for odd distances ( for central nodes ) . in fig . [ fig : www1 ] we show the influence of the sequence of adding nodes to the boxes on the results of the greedy algorithm .
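the scaling relation above can be checked numerically . the following sketch ( our own helper , assuming pairs of box sizes and box numbers as input ) estimates the fractal dimension from the slope of a least - squares fit in log - log coordinates :

```python
import math

def fractal_dimension(l_b, n_b):
    """Estimate d_B from N_B ~ l_B^(-d_B) via a least-squares fit of
    log N_B against log l_B; returns the (positive) dimension estimate."""
    xs = [math.log(l) for l in l_b]
    ys = [math.log(n) for n in n_b]
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope
```

in practice , the fit should be restricted to the range of box sizes where the power law holds , since the box number decays rapidly for large boxes .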
while the results of fig . [ fig : www ] are the minimal values obtained from 50 independent starting sequences , we calculated 1500 realizations for a single box size . the difference between the improvements is , with and , rather small . the gap between the optimal solution and the greedy algorithm is too large ; thus , for practical purposes , the greedy algorithm will never find the optimal solution for this box size . + the results for these two benchmark networks demonstrate that our algorithm is more effective than the state - of - the - art algorithms . nevertheless , due to the rapid decay of the number of boxes for larger box sizes , the fractal dimension of the two benchmark networks is only slightly different when using the optimal box - covering algorithm in comparison with other algorithms . in closing , we have presented a box - covering algorithm which outperforms the previously known ones . we have also compared our algorithm with the state - of - the - art methods for different benchmark networks and detected substantial improvements . moreover , the obtained solutions are optimal as a result of the algorithm design , if the box size is defined as the maximal distance to the central node or edge . for example , our approach can be useful for designing optimal commercial distribution networks , where the shops are the nodes , the storage facilities the box centers , and the radius is related to the boundary conditions , like transportation cost or time . we acknowledge financial support from the eth competence center coping with crises in complex socio - economic systems ( ccss ) through eth research grant ch1 - 01 - 08 - 2 and by the swiss national science foundation under contract 200021 126853 . we also thank the brazilian agencies cnpq , capes , funcap and the inst - sc for financial support . d. watts and s. strogatz , _ nature ( london ) _ * 393 * 440 ( 1998 ) . r. albert , h. jeong and a .- l . barabási , _ nature _ * 401 * 130 ( 1999 ) . m. barthélemy and l. a. n.
amaral , _ phys . rev . lett . _ * 82 * 5180 ( 1999 ) . a. l. lloyd and r. m. may , _ science _ * 292 * , 1316 - 1317 ( 2001 ) . r. cohen , s. havlin , and d. ben - avraham , _ phys . rev . lett . _ * 91 * 247901 ( 2003 ) . m. barthelemy , a. barrat , r. pastor - satorras , and a. vespignani , _ phys . rev . lett . _ * 92 * , 178701 ( 2004 ) . m. c. gonzález , p. g. lind , h. j. herrmann , _ phys . rev . lett . _ * 96 * 088702 ( 2006 ) . l. k. gallos , c. song , s. havlin , h. a. makse , _ proc . natl . acad . sci . _ * 104 * 7746 ( 2007 ) . a. a. moreira , j. s. andrade jr . , h. j. herrmann and j. o. indekeu , _ phys . rev . lett . _ * 102 * 018701 ( 2009 ) . h. hooyberghs , b. van schaeybroeck , a. a. moreira , j. s. andrade jr . , h. j. herrmann and j. o. indekeu , _ phys . rev . e _ * 81 * 011102 ( 2010 ) . g. li , s. d. s. reis , a. a. moreira , s. havlin , h. e. stanley and j. s. andrade jr . , _ phys . rev . lett . _ * 104 * 018701 ( 2010 ) . h. j. herrmann , c. m. schneider , a. a. moreira , j. s. andrade jr . and s. havlin , _ j. stat . mech . _ * p01027 * ( 2011 ) . c. m. schneider , a. a. moreira , j. s. andrade jr . , s. havlin and h. j. herrmann , _ proc . natl . acad . sci . _ * 108 * 3838 ( 2011 ) . c. m. schneider , t. mihaljev , s. havlin , h. j. herrmann , _ phys . rev . e _ * 84 * 061911 ( 2011 ) . a. vespignani , _ nature physics _ * 8 * 32 ( 2012 ) . h .- o . peitgen , h. jürgens and d. saupe , chaos and fractals : new frontiers of science ( springer ) ( 1993 ) . j. feder , fractals ( plenum press ) ( 1988 ) . a. bunde and s. havlin ( eds . ) , fractals in science ( berlin : springer - verlag ) ( 1995 ) . t. r. jensen and b. toft ( eds . ) , graph coloring problems ( new york : wiley - interscience ) ( 1995 ) . t. h. cormen , c. e. leiserson , r. l. rivest and c. stein , introduction to algorithms ( mit press ) ( 2001 ) . c. song , s. havlin and h. a. makse , _ nature _ * 433 * 392 - 395 ( 2005 ) . c. song , l. k. gallos , s. havlin and h. a. makse , _ j. stat . mech .
_ * 03006 * ( 2007 ) . m. r. garey and d. s. johnson , computers and intractability : a guide to the theory of np - completeness ( new york : w. h. freeman ) ( 1979 ) . s. h. yook , f. radicchi and h. meyer - ortmanns , _ phys . rev . e _ * 72 * 045105 ( 2005 ) . g. palla , i. derényi , i. farkas and t. vicsek , _ nature _ * 435 * 814 ( 2005 ) . zhao , h. j. yang and b. wang , _ phys . rev . e _ * 72 * 046119 ( 2005 ) . c. song , s. havlin and h. a. makse , _ nature physics _ * 2 * 275 - 281 ( 2006 ) . k .- i . goh , g. salvi , b. kahng and d. kim , _ phys . rev . lett . _ * 96 * 018701 ( 2006 ) . a. a. moreira , d. r. paula , r. n. costa filho and j. s. andrade jr . , _ phys . rev . e _ * 73 * 065101 ( 2006 ) . m. kitsak , l. k. gallos , s. havlin , f. liljeros , l. muchnik , h. e. stanley , h. a. makse , _ nature physics _ * 6 * 888 - 893 ( 2010 ) . m. locci , g. concas , r. tonelli and i. turnu , _ wseas trans . _ * 7 * 371 - 380 ( 2010 ) . w .- x . zhou , z .- q . jiang and d. sornette , _ physica a _ * 375 * 741 - 752 ( 2007 ) . http://lev.ccny.cuny.edu//methods/methods.html .
starting more than a decade ago , considerable observational evidence has been obtained indicating that planets are able to exist in stellar binary ( and higher - order ) systems ; see results and discussions by , e.g. , , , and . these observations are in line with the empirical finding that binary ( and higher - order ) systems occur with high frequency in the local galactic neighborhood . for example , presented results of a detailed analysis of companions to solar - type stars , based on a sample size of 454 , and concluded that the overall fractions of double and triple systems are about 33% and 8% , respectively , if all confirmed stellar and brown dwarf companions are accounted for . updated results were meanwhile given by . this study identifies 57 exoplanet host stars as having a stellar companion . the fairly frequent occurrence of planets in binary systems is furthermore consistent with the presence of debris disks in a considerably large number of main - sequence star binary systems ( e.g. ) . in principle , as discussed by , planets in binary systems can be identified through two different avenues : first , binaries or multiple star systems can be surveyed for the presence of planets by utilizing the established detection methods . second , stars with detected planets can be scrutinized afterward to check if they possess one or more widely separated stellar companion(s ) ; in this case , the planet(s ) will also be categorized as belonging to a binary ( or higher - order ) system . from the viewpoint of orbital mechanics , there are two different kinds of possible orbits ( notwithstanding positions near the lagrangian points l4 and l5 ) for planets in binary systems : s - type and p - type orbits . a p - type orbit is given when the planet orbits both binary components , whereas in the case of an s - type orbit the planet orbits only one of the binary components , with the second component acting as a perturber .
presented a list of 15 planet - bearing binary systems with all planets in s - type orbits . they constitute mostly wide binaries with separation distances of up to au ; however , smaller separation distances on the order of 20 au or less have also been identified . in the meantime , systems with planets in p - type orbits have also been identified . arguably , the most prominent case is kepler-16 , as reported by and previously suggested by , containing a saturnian - mass circumbinary planet . this system was subsequently studied regarding the possibility of habitable exoplanets and habitable exomoons . recently , a transiting circumbinary multiplanet system , i.e. , kepler-47 , has also been identified . there is a significant body of literature devoted to the study of habitability , typically assuming an earth - like planetary atmosphere ( see sect . 2 for details ) . more sophisticated approaches to habitability have been given in the meantime , taking into account additional aspects such as the planet 's size and mass , atmospheric structure and composition , magnetic field , geodynamic properties , ionizing stellar uv and x - ray fluxes , and tidal locking ( if existing ) ( e.g. ) . additionally , as pointed out by , planets with sufficiently thick atmospheres may remain habitable even when temporarily absent from their hzs due to orbits of considerable ellipticity . this possibility is disregarded in the following , as planets will be required to stay permanently in the chz , ghz , or ehz ( see sect . 2 for definitions ) , as applicable , to be considered habitable . a large portion of this literature addresses habitability
in binary systems as well as in multiplanetary systems , which often also encompass stellar evolutionary considerations . examples include the work by , , , , , , , , and . an important aspect that has received increased recognition in the literature is that in order for habitability to exist and to be maintained , a joint constraint needs to be met that includes both orbital stability and a habitable environment for a system planet provided through the stellar radiative energy fluxes . in the framework of this paper , the zone related to this latter requirement will subsequently be referred to as the _ radiative habitable zone _ ( rhz ) , which constitutes a necessary , though often insufficient , condition for the existence of circumstellar habitability . previous work , mostly concentrated on the existence of habitability in single - star multi - planetary systems , rendered the publication of detailed " stability catalogs " for the habitability zones of extrasolar planetary systems ( e.g. ) . for example , quantified the dynamical habitability of 85 planetary systems by considering the perturbing influence of giant planets beyond the traditional hill sphere for close encounters with the theoretical terrestrial planets . they concluded that a significant fraction of the identified extrasolar planetary systems are unable to harbor habitable terrestrial planets . a statistical study on the stability of earth - mass planets orbiting solar - mass stars in the presence of stellar companions , focusing on both the statistical properties of ejection times and the general prospects of planetary habitability , was given by . additional work providing stability assessments for various observed extrasolar planetary systems based on detailed stability maps was given by .
explored the orbital stability of planets in double - planet systems for binaries by supplying an analytic framework based on secular perturbation theory ; they also provided dynamical classification categories . additional stability analyses to assess the habitability of planetary systems based on detailed numerical simulations were given by and ; note that the study of also dealt with a limited number of cases of planets in double - star systems in orbit either around one stellar component ( s - type ) or around both components ( p - type ) . the study of planetary dynamics and habitable planet formation has meanwhile been described by , e.g. , and . they show that earth - mass planets are , in principle , able to form in stellar binary systems , although many details of the relevant processes are not fully understood . the overarching conclusion of those investigations is that habitable planets in stellar binary ( and , as anticipated , higher - order ) systems are , in general , possible , which is a strong motivation for providing a comprehensive study of s - type and p - type habitability in binary systems . the approach adopted in this study will be entirely analytic . specifically , it will consider both s - type and p - type habitable orbits in view of the joint constraint that includes orbital stability and a habitable region for a system planet provided through the appropriate amount of stellar radiative energy flux . an earlier study focused on s - type habitability in binary systems , taking into account both circular and elliptical orbits for the stellar binary components ; this latter aspect is , however , beyond the scope of the present work , as we solely focus on systems in circular orbits . numerical studies for p - type habitable environments with applications to kepler-16 , kepler-34 , kepler-35 and kepler-47 have been given by . our paper is structured as follows : in sect . 2 , we comment on the adopted main - sequence star parameters and single - star habitability .
in sect . 3 , we introduce our theoretical approach , suitable for stellar systems of the order of , although our focus will be on binary systems . in this regard , both s - type and p - type orbits will be examined , and detailed mathematical criteria for the existence of s - type and p - type rhzs will be derived . in sect . 4 , we consider the additional constraint of planetary orbital stability for the establishment of circumstellar habitability . applications regarding s - type and p - type systems are given in sect . 5 , whereas the habitability classifications s , p , st , and pt are introduced in sect . 6 . section 7 conveys our summary and conclusions . in this study , s - type and p - type habitability is investigated mostly pertaining to standard ( i.e. , theoretical ) main - sequence stars . the adopted stellar parameters , which are the stellar effective temperatures , the stellar radii ( which together allow one to define the stellar luminosities ) , and the stellar masses , are mainly based on the work by ( see his table b.1 ) , which assumes detailed photospheric spectral analyses . for stellar spectral types with no data available , the missing data were computed by employing biparabolic interpolation . the exceptions , however , are data for stars of spectral type k5 v and below . in this regard , we relied on the results from the spectral models of r. l. kurucz and collaborators . they took into account hundreds of millions of spectral lines for a large set of atoms and molecules ; see and for details . the effective temperatures implied by these models are closely similar to those given by for most types of stars ; however , reports consistently higher effective temperatures for stars of spectral types late - k and m ; for the latter , the difference amounts to nearly 300 k.
table 1 depicts the stellar parameters adopted for the present work . an alternative approach , expected to provide very similar results for either the stellar luminosity or the stellar mass ( with the other parameter taken as fixed ) , is the employment of a mass luminosity relationship applicable to main - sequence stars . the work by , as well as data from subsequent studies , yield , with and for , and and for . at the high - mass end , this relationship holds until about . it also becomes increasingly inaccurate for low - mass m dwarfs . fortunately , the domain of applicability of eq . ( 1 ) is consistent with most studies of binary habitability ; see , e.g. , as an example . next we focus on single - star habitability , i.e. , the evaluation of various limits of habitable zones ( hzs ) , which in the solar case shall be referred to as . previous work by , e.g. , distinguished between the conservative habitable zone ( chz ) and the generalized habitable zone ( ghz ) , which can also be evaluated for general main - sequence stars , and for other types of stars as well . for the sun , the limits of the chz are given as 0.95 and 1.37 au ( = 2 and 4 , respectively ) , whereas for the ghz , they are given as 0.84 and 1.67 au ( = 1 and 5 , respectively ) ; see table 2 . the physical significance of the various kinds of hzs obtained by can be summarized as follows : the ghz is defined as bordered by the runaway greenhouse effect ( inner limit ) and the maximum greenhouse effect ( outer limit ) . concerning the latter , it is assumed that a cloud - free co atmosphere is still able to provide a surface temperature of 273 k. the inner limit of the chz is defined by the onset of water loss . in this case , a wet stratosphere is assumed to exist , where water is lost by photodissociation and subsequent hydrogen escape to space . furthermore , the outer limit of the chz is defined by the first co condensation , attained by the onset of formation of co clouds at a temperature of 273 k ; see , e.g.
, and for further details .table 3 conveys the results for the hzs for the different types of main - sequence stars of the present study with the different limits referred to as hz . for the outer edge of circumstellar habitability ,even less stringent limits have been introduced in the meantime ( e.g. , * ? ? ?* ; * ? ? ?they are based on the assumption of relatively thick planetary co atmospheres as well as strong backwarming that may further be enhanced by co crystals and clouds .these limits , which in case of the sun correspond to 2.4 au , conform to the extended habitable zone ( ehz ) , have also been taken into account in our study , although the significance of the ehz has meanwhile been criticized as a result of detailed planetary radiative transfer models . moreover , in the framework of the present study , we also consider planetary earth - equivalent positions defined as and labelled as ; see tableit is meant as an intriguing reference distance of habitability both regarding single stars and stellar binary systems .next we introduce the governing equations for investigating the rhzs of binary systems pertaining to both s - type and p - type orbits .this approach targets the requirement of providing a habitable region for a system planet based on the radiative energy fluxes of the stellar components .the requirement of planetary orbital stability will be disregarded for now ; it will be revisited in sect .the importance of orbital stability for allowing circumstellar habitability in stellar binaries will , however , be considered in an appropriate and consistent manner in the main body of the study . 
for a star of luminosity , given in units of the solar luminosity , the distance of the habitability limit as identified for the sun , which may constitute either an inner or outer limit of habitability ( except ) , is given as . in the case of a multiple star system of order with distances , the limit of habitability related to is given as . in eqs . ( 2 ) and ( 3 ) , ( see table 1 ) describes the stellar flux in units of the solar constant , which is a function of the stellar effective temperature ( e.g. ) . specifically , using the formalism of , represents the normalized stellar flux in units of the solar constant , 1368 w m^-2 , given by the stellar spectral energy distribution . therefore , ordinarily , no dependence for should exist . however , the formulae by utilize previous results by , who provided numerical values for limits of habitability for different types of stars considering various limit definitions ( i.e. , values identified for the sun ) , but used for the solar effective temperature an unusually low value of 5700 k instead of 5777 k as currently accepted ( e.g. ) . hence , transforming the polynomial fit based on the work by so as to consider the correct solar effective temperature renders a weak dependence on for the values . in contrast , the method by provides a polynomial fit for without considering the solar temperature revision . an alternative method has been used by and subsequent work . in this approach the polynomial fit by is corrected via a triangular function based on data for stars of spectral type f0 v , g0 v , and k0 v. as a result , the corresponding values also do not depend on . based on the work by , we find that , with , , and in au , and in k.
also found that for , corresponding to inner limits of habitability , the fitting parameters are given as and , whereas for , corresponding to outer limits of habitability , they are given as and ; note that corresponds to the customary notion of earth - equivalent positions . appropriate values for are given in table 2 . in the following we will focus on the case of binary systems , i.e. , . in this case , eq . ( 3 ) reads , with . here denotes the semidistance of the binary separation , the distance of a position at the habitability limit contour ( which later on will be referred to as the " radiative habitable limit " ; see below ) , and the associated angle ; see fig . 1 for information on the coordinate set - up for both s - type and p - type orbits . we will also assume without loss of generality . with defined as , henceforth referred to as the _ recast stellar luminosity _ ( see table 4 ) , is given as , with . equation ( 8) constitutes a fourth - order algebraic equation that is known to possess four possible solutions , although some ( or all ) of them may be unphysical , i.e. , of complex or imaginary value . the adopted coordinate system constitutes , in essence , a polar coordinate system , except that negative values for are permitted ; in this case the position of is found on the opposite side of angle . in principle , it is possible to require eq . ( 8) to only have solutions for given as ; in this case the entire interval for , which is , needs to be examined . the following types of solutions are identified : for s - type orbits , two solutions exist in the intervals centered at and at ( or one coinciding solution at each tangential point ) ; see fig . 1 . however , there will be no solution in the typically relatively large intervals containing and . clearly , the size of any of those intervals critically depends on the system parameters , , , and , as expected .
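for a given direction , the radiative habitable limit can also be obtained numerically from the underlying flux condition , i.e. , the combined flux of both stars equals the flux at the corresponding solar - calibrated limit distance . the following sketch is our own simplified version : it ignores the weak effective - temperature correction and assumes a p - type configuration in which the limit lies outside the binary separation , and it finds the limit radius by bisection rather than from the quartic :

```python
import math

def flux(R, phi, L1, L2, a):
    """Combined stellar flux (in solar-constant units at 1 AU) at the polar
    point (R, phi), with the stars placed at (-a, 0) and (+a, 0)."""
    x, y = R * math.cos(phi), R * math.sin(phi)
    d1sq = (x + a) ** 2 + y ** 2
    d2sq = (x - a) ** 2 + y ** 2
    return L1 / d1sq + L2 / d2sq

def ptype_rhl(phi, L1, L2, a, s_l, R_hi=1e3, tol=1e-12):
    """Outer (P-type) radiative habitable limit R(phi): the radius where the
    combined flux drops to the threshold 1/s_l**2 set by the solar limit
    distance s_l (AU); luminosities in solar units."""
    target = 1.0 / s_l ** 2
    lo, hi = a + 1e-9, R_hi   # flux > target near the stars, < target far away
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flux(mid, phi, L1, L2, a) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

for two equal stars , perpendicular to the binary axis the result reduces to a closed form , which provides a simple consistency check of the sketch .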
for p - type orbits , on the other hand , there will be one solution for each value of in the range of . however , in general , negative values for the solutions of also exist . if taken into account , it will be sufficient to restrict the evaluation of eq . ( 8) to the range . in this case , for s - type orbits , there will be four solutions in the intervals with endpoints and , as well as two solutions ( if ) in a more extended interval containing these points . also , a pair of solutions will become one coinciding solution at each tangential point . however , again , there will be no solution in the interval containing . in the case of p - type orbits , there will be two solutions for any value of in the range of . we will revisit this assessment in conjunction with the algebraic method for attaining the solution ; additionally , detailed mathematical criteria will be given for the existence of rhzs for s - type and p - type orbits . next we will focus on equal - star binary systems . detailed solutions for general binary systems ( i.e. , systems of stellar components with generally unequal masses , luminosities , and effective temperatures ) pertaining to both s - type and p - type orbits will be given in sect . 3.3 . both subsections will be aimed at deriving rhzs ; see , e.g. , for general discussions on the role of rhzs for the attainment of habitability in star planet systems . however , strictly speaking , they will deal with identifying _ radiative habitable limits _ ( rhls ) connected to a distinct value of , noting that manifesting an rhz requires the rhl for to be located _ completely outside _ of the rhl for , with and appropriately paired . a summary about the existence and structure of the rhzs , encompassing the radiative chzs , ghzs , and ehzs , will be given in sect . 3.4 ; this subsection will also convey cases where no rhzs exist due to the behavior of the rhls owing to the choices of and . now we focus on the special case of equal - star binary systems , i.e.
, stars of identical recast luminosities , i.e. , . for theoretical main - sequence stars this assumption also implies and ; this latter assumption about the stellar masses is relevant for the orbital stability constraint of system planets . with , eq . ( 8) now constitutes a biquadratic equation that can be solved in a straightforward manner . the other coefficients are given as . thus , the solution of eq . ( 8) is given as , with . with known system parameters , which are , , and , the function , describing the habitability limits for the binary system associated with inner - limit and outer - limit values derived for the sun ( see sect . 2 for details ) , can be obtained in a straightforward manner . owing to the system symmetry , the existence of s - type and p - type rhls can be identified by attaining the solutions of eq . ( 11 ) for and . first we examine the solutions of eq . ( 11 ) for , i.e. , , which are given as . this allows us to explore the existence of s - type rhls . the total number of solutions for ( if existing ) is four , as expected , which can be ordered as . due to symmetry it is found that , which implies that . thus , the condition for the existence of s - type rhls is given as . next we examine the solutions of eq . ( 11 ) for , i.e. , . this allows us to explore the existence of p - type rhls ; the latter implies two solutions of eq . ( 11 ) regardless of the value for . if the positive root of ( see eq . 12 ) is considered , the solution is given as . thus , the condition for the existence of p - type rhls is given as . therefore , eqs . ( 15 ) and ( 17 ) allow us to identify the conditions for s - type and p - type rhls , respectively , for equal - star binary systems , which depend on the system parameters , , and ; note that the equal signs in these equations carry little relevance .
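for equal - star systems , the mutual exclusivity of s - type and p - type rhls can be made plausible by a simple midpoint - flux argument . the following is our own heuristic sketch , not a transcription of eqs . ( 15 ) and ( 17 ) : the contour at the threshold splits into two lobes around the individual stars exactly if the combined flux at the system center falls below the threshold , and encloses both stars otherwise :

```python
import math

def rhl_topology(L, a, s_l):
    """Classify the equal-star RHL contour at the threshold 1/s_l**2.

    Two stars of luminosity L (solar units) sit at (-a, 0) and (+a, 0),
    so the combined flux at the midpoint is 2L/a**2.  Below the threshold,
    the contour splits into separate lobes around the individual stars
    (S-type); above it, a single contour encloses both stars (P-type).
    """
    midpoint_flux = 2.0 * L / a ** 2
    threshold = 1.0 / s_l ** 2
    return "S-type" if midpoint_flux < threshold else "P-type"
```

equivalently , in this picture an s - type rhl requires the binary semidistance to exceed the solar limit distance times sqrt(2 l) , which also reproduces the mutual exclusivity noted below , since for a given threshold the contour is of exactly one of the two types .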
comparing eqs . ( 15 ) and ( 17 ) also implies that the joint existence of s - type and p - type habitability in equal - star binary systems in circular orbits is not possible , irrespective of the system parameters and the planetary orbital stability requirement ( see sect . 4 ) , noting that the latter imposes an additional constraint on habitability even when the rhz - related conditions are met . figure 2 depicts the borders of the s - type and p - type radiative habitable limits , i.e. , rhls , for different values of in regard to and . we now focus on obtaining solutions for our key equation , eq . ( 8) , pertaining to s - type and p - type rhls for general binary systems , i.e. , . following , e.g. , and , the set of solutions for a fourth - order polynomial reads , with given as a solution of the resolvent cubic equation and with , , and given by eqs . ( 9a ) to ( 9c ) ; here the term of in eqs . ( 19b ) and ( 19c ) corresponds to the case of non - equal - star binaries assumed in the following . note that for equal - star binaries a more straightforward method of solution is available ( see sect . 3.2 ) . the solutions given through eqs . ( 18a ) to ( 18d ) , if existing , are ordered as for ; this order is also maintained for any other value of , as identified in all model simulations pursued . although eq . ( 20 ) has three possible solutions , there is only one appropriate choice for , named , because it is necessary to avoid that all obtained through eq . ( 8) are of imaginary or complex - conjugate value in cases where s - type or p - type rhls exist . the acceptable solution for is given as , with the substitutions s = \sqrt[3]{r + \sqrt{d_3}} , t = \sqrt[3]{r - \sqrt{d_3}} , and d_3 = q^3 + r^2 , and with and given as , while noting that these sets of equations can be solved and appropriate values for can be obtained . the results will depend on the system parameters , , , and , as expected . next we describe the solutions for s - type and p - type rhls in more detail . it is
important to recognize that a priori choices about the existence of s - type and p - type rhls are neither necessary nor possible , as the existence of any of those rhls is determined by the fulfillment of well - defined mathematical conditions ; they will also be given in the following . an analysis of the possible solutions for eqs . ( 18a ) to ( 18d ) shows that for s - type rhls valid solutions are obtained based on s = \sqrt[6]{r^2 + k^2} \, ( \cos \xi + i \sin \xi ) and t = \sqrt[6]{r^2 + k^2} \, ( \cos \xi - i \sin \xi ) , with , and with given by eq . ( 23b ) . therefore , the solution of the resolvent cubic equation , eq . ( 20 ) , is given as s + t = 2 \sqrt[6]{r^2 + k^2} \cos \xi . for the values of , it is found that for the interval centered at , and exhibit negative values , whereas and exhibit positive values , with . thus , and describe the rhl regarding star s1 , whereas and describe the rhl regarding star s2 ( see fig . 1 ) . conversely , for the interval centered at , and again exhibit negative values and and exhibit positive values . in this case , and describe the rhl for star s2 , whereas and describe the rhl for star s1 . no solutions are obtained in the vicinity of and , as expected . thus , for each angle in the range of , the appropriate number of solutions is attained to describe s - type rhls . however , due to symmetry , solutions are only needed for . the existence of s - type rhls requires that , because otherwise the two distinct s - type rhl contours about the two binary components would not be separated , which corresponds to the condition . equation ( 28 ) can be rewritten to provide an expression based on the system parameters , , and defined through eqs . ( 9a ) to ( 9c ) . it is found that , with . the expression for is highly complicated ; however , it can be obtained based on eqs . ( 21 ) to ( 24c ) by using , e.g. , mathematica in a straightforward manner .
in conclusion , for s - type rhls to exist for the system parameters , , , and , it is necessary that the relations ( 28 ) and ( 29 ) , which are equivalent , be fulfilled for any angle of , though the evaluation can be limited to . furthermore , through analytical transformations it can be shown that the condition depicted as eqs . ( 28 ) and ( 29 ) requires . in the limiting case of equal - star binary systems , attained as , eq . ( 30 ) can be simplified as ; this relationship is fulfilled in a trivial manner . an analysis of the possible solutions for eqs . ( 18a ) to ( 18d ) also shows that for p - type rhls valid solutions require ; see eq . . the detailed evaluation of this condition requires the evaluation of various sets of equations , denoted as eqs . ( 9a ) to ( 9c ) , ( 22a ) to ( 22c ) , and ( 23a ) to ( 23b ) ; see sects . 3.1 and 3.3.1 . in terms of the solutions for p - type rhls , it is found that for and , exhibit negative values and exhibit positive values , whereas and are undefined ; they are also not needed for outlining p - type rhls . moreover , for the range of , exhibit negative values and exhibit positive values , noting that and remain undefined . for and , removable singularities are identified , which can easily be fixed through interpolation , taking values of for neighboring angles of . in summary , for each angle in the range of , two values of ( i.e. , one positive and one negative value ) are identified , allowing one to determine p - type rhls . however , due to symmetry , solutions are only needed for . moreover , through analytical transformations it can be shown that the condition depicted as eq . ( 32 ) can be rewritten as , with , , and defined through eqs . ( 9a ) to ( 9c ) and with the left - hand side of eq . ( 33 ) representing .
In conclusion, for P-type RHLs to exist for the system parameters , , , and , the relations (32) and (33), which are equivalent, must be fulfilled for any angle of , though the evaluation can be limited to . In the limiting case of equal-star binary systems, attained as , eq. (33) can be simplified as This relationship, already given as eq. (31), is fulfilled in a trivial manner. The identification of the RHZs in binary systems requires the calculation of limits of habitable zones, i.e., RHLs, as pointed out in Sect. . The RHZs need to be established for values of with = 1, 2, 4, 5, and 6 (see Sect. 2 and Table 2), which are informed by model-dependent physical limits of habitability for the solar environment (e.g., * ? ? ?* ). As part of the process, the parameters of need to be appropriately paired in terms of the inner and outer limits of habitability. For the CHZ the parameters need to be paired as , whereas for the GHZ, they need to be paired as . For the EHZ, the parameters of need to be paired as , considering that both the CHZ and the GHZ shall be viewed as subdomains of the EHZ. For S-type and P-type orbits, the radiative zones of habitability, which constitute a circular region (annulus) around each star S1 and S2 (S-type) or around both stars (P-type), can be determined as and respectively; see Fig. 1 for coordinate information. Here and describe the areas bordered by the RHLs defined by and . The calculation of the extrema is applied to the angles and for the intervals and , respectively; note that we assumed without loss of generality. In the S-type case the calculation of the extrema pertaining to the RHZ values is based on the angular coordinate instead of ; however, the angular coordinate is still needed for the calculation of as part of the overall approach toward identifying S-type and P-type habitability. Figure 3 depicts examples of RHLs and RHZs for different types of systems.
In the S-type case the RHLs are bent toward the center of the system, whereas in the P-type case they are of notably elliptical shape. The RHZs always constitute circular annuli obtained through inspecting the appropriate minima and maxima of the RHLs. The examples as depicted include S-type and P-type systems with separation distances of 0.5 au and 5.0 au, respectively. Cases of both equal-star and non-equal-star binaries are selected. The focus of this figure is the identification of the appropriate circular region (i.e., annulus) for each case. The figure also indicates the portions within the and domains that are not part of the RHZ annuli. Next we determine the values for the extrema pertaining to following eqs. (35) and (36) based on the solutions of eq. (8) given as eqs. (18a) to (18d). In cases where four solutions exist, it is found that they are ordered as with and constituting negative values, and and constituting positive values. If the negative solutions for are permitted, it is sufficient for S-type orbits, both for star S1 and S2, to only consider solutions for . For P-type orbits, a more detailed assessment is required (see below). For S-type orbits, regarding star S1, the extrema are obtained as and for star S2, they are obtained as The size of each annulus for pairs is given as . also constitutes a generalization of HZ with previously defined for single stars (see Sect. 2 and Table 3). Likewise, constitutes a generalization of HZ with 4, 5, and 6. For P-type orbits, the extrema are given as follows: We note that for the angle for no straightforward expression exists (an analytic expression for is deemed possible; however, it would be highly complicated, and thus a numerical solution may be preferred); it is located in the interval . It can be found numerically, as it is given by the angle where the minimum of occurs.
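Since the habitable annulus must lie inside both bordering RHLs at every angle, its inner radius is the maximum of the inner RHL over the angle and its outer radius the minimum of the outer RHL; if these cross, the zone is nullified (cf. eq. 40). A minimal sketch, with the two RHL profiles assumed to be supplied as sampled radii rather than through the paper's closed-form extrema:

```python
def rhz_annulus(inner_rhl, outer_rhl):
    """Given the inner and outer radiative habitable limits sampled over
    the polar angle (sequences of radii, in au), return the habitable
    annulus (R_in, R_out).  The annulus must lie inside BOTH limits at
    every angle, so R_in is the maximum of the inner RHL and R_out the
    minimum of the outer RHL; if the outer RHL dips inside the inner
    one (cf. eq. 40), the zone is nullified and None is returned."""
    r_in = max(inner_rhl)
    r_out = min(outer_rhl)
    return (r_in, r_out) if r_in < r_out else None
```

With the endpoint values quoted later for a 0.5 au P-type system (inner CHZ limit varying 1.65–2.14 au, outer 2.28–2.76 au), this yields the 2.14–2.28 au annulus; if the minimum of the outer limit falls below the maximum of the inner one, the zone is reported as nullified.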
In the special case of , with and denoting the stellar primary and secondary, respectively, it is found that , whereas for it is found that (see Sect. 3.2). There is also another complication in the identification of RHZs for P-type orbits. Generally, it is required for the RHL of to be located completely outside of the RHL of , i.e., This condition is, however, violated in some models, especially for relatively large values of as well as relatively small ratios of . In this case the RHZ for is nullified, a behavior that may occur for the pairings , , and , corresponding to the CHZ, GHZ, and EHZ, respectively. In this regard the existence of the CHZ is in most jeopardy, as constitutes the smallest bracket among the various kinds of HZs (see Table 2). Also note that if the GHZ is nullified, the CHZ will be nullified as well, considering that the CHZ (if existing) is entirely located within the GHZ. Likewise, if the EHZ is nullified, the existence of both the CHZ and the GHZ will be nullified. Detailed examples will be given in the application segment of this paper; see Sect. 5.2 for details. However, this type of phenomenon does not occur for RHZs pertaining to S-type orbits. For equal-star binary systems, with the property of , expressions for RHZ and RHZ for S-type and P-type orbits can be obtained based on eqs. (13) and (16). For S-type orbits we find It is also intriguing to explore the limits and . If these limits are met, it is found that Moreover, in the limit of , the expressions for single-star HZs regarding and are recovered, as expected, which are given as and , respectively. They are in agreement with the expressions previously obtained by , , , and others. Results for P-type orbits can be obtained considering In this case we find based on the system parameters , , , , and .
Additionally, the requirement that the RHL for not be partially or completely located inside of the RHL for , see eq. (40), entails which allows one to set constraints on the separation distance of the binary system, noting that the values of , , , and are subject to distinct restrictions, particularly in the case of main-sequence stars (see Tables 1 and 2). Depictions of the condition (45) for equal-star binaries for the pairings , , and , corresponding to the CHZ, GHZ, and EHZ, respectively, are given in Fig. . A similar expression is expected to hold for nonequal-star binaries, albeit a highly complicated one. Hence, for those systems a numerical assessment of and (see eqs. 39a and 39b) may be preferred to accommodate condition (40); see Sect. 5.2.2 for additional information and data. A primary constraint on planetary habitability is that planets are required to exist in the HZ for a sufficient amount of time, allowing basic forms of life to emerge and develop. In order to adhere to this criterion, planetary orbital stability is required. There is a significant body of literature devoted to this topic, including studies of binary and multi-planetary systems, which often also consider aspects of stellar evolution (e.g., * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ). Early studies of planetary orbital stability pertaining to planets in both S-type and P-type orbits demonstrated that planets can exist in systems of binary stars for 3000 binary periods. Although these investigations considered relatively short integration times, determined upper and lower bounds of planetary orbital stability considering the orbital elements, semimajor axis and eccentricity, of the proposed binary stars.
Since this pioneering work, many additional studies have been performed. The foremost investigation extended the original study by a factor of 10 in integration times and an extended range of orbital elements. In addition, the nature of the bounding formula was derived and discussed using a more statistical framework. developed fitting formulae for both S-type and P-type planets in binary systems, given as and respectively. These equations give the critical semimajor axis in units of the semimajor axis in the case of S-type and P-type orbits. For an S-type orbit, the ratio , see eq. (46), conveys the _upper limit_ of planetary orbital stability, whereas for a P-type orbit, the ratio , see eq. (47), conveys the _lower limit_ of planetary orbital stability. Moreover, denotes the stellar mass ratio, given as , where and constitute the two masses of the binary components with . Equations (46) and (47) also contain the parameter functions and , which depend on the aforementioned mass ratio and the eccentricity of the stellar binary, . Considering that this paper is solely aimed at stellar binaries in circular orbits (i.e., ), it is found that . Planetary orbital stability has been investigated by many authors using chaos indicators, such as the maximal Lyapunov exponent (MLE), the fast Lyapunov indicator (FLI), and the mean exponential growth factor of nearby orbits (MEGNO), to name those commonly used; see, e.g., for details, recent applications, and references. These methods have also been used to characterize the transition from stable to unstable orbits within the framework of the circular and elliptical 3-body problems; see, e.g.
, , , and for details. Previously, studied the stability of both S-type and P-type orbits in stellar binary systems, and deduced orbital stability limits for planets. These limits were found to depend on the mass ratio between the stellar components and the distance ratio between planetary and binary semimajor axes. This topic was revisited by , who used the concept of Jacobi's integral and Jacobi's constant to deduce stringent criteria for the stability of planetary orbits in binary systems for the special case of the coplanar circular restricted three-body problem. Recently, planetary orbital stability was studied through the perspective of a chaos indicator, the MLE, by, e.g., . From the use of a chaos indicator, a cutoff value for the maximum Lyapunov exponent was determined as an additional stability criterion for S-type planets in the circular restricted 3-body problem. Next we investigate S-type habitability for selected binary systems, including systems of equal and non-equal masses (see Table 5). Our main intent is to demonstrate the functionality of the method as proposed (a system with and , and a Saturnian planet in a P-type orbit; see for a detailed study of the system's habitability); an extensive parameter study will be given in Sect. 6. Figure 5 allows comparative insight into S-type habitability for selected binary systems, i.e., systems with masses of and , ; the binary separation distances are chosen as 10 au and 20 au. For single stars of , the radiative CHZ extends from 1.049 to 1.498 au, and the radiative GHZ extends from 0.927 to 1.831 au; these values are slightly higher than those for G2 V stars given in Table 3 owing to a minuscule difference in mass (i.e., 0.99 versus 1.0).
For an equal-mass binary system of 1.0 with a separation distance of 10 au, the radiative CHZ and GHZ extend from 1.056 to 1.511 au, and from 0.932 to 1.853 au, respectively, for each component. Furthermore, the outer limit of the radiative EHZ is altered from 2.64 to 2.70 au. However, there is now an upper orbital stability limit of 1.37 au imposed on each star. Consequently, significant portions of the radiative CHZ and GHZ are unavailable as circumstellar habitable regions. If the second star is placed at a distance of 20 au, the alteration of the radiative CHZ and GHZ relative to single stars is very minor. Specifically, for binary separations of 10 au and 20 au, the sizes of the radiative GHZ increase by 1.9% and 0.6%, respectively, relative to the case of single stars. Moreover, for the system with a separation distance of 20 au, the imposed orbital stability limit is found at 2.74 au; consequently, the full extents of the radiative CHZ, GHZ, and EHZ are now available for planetary habitability. Figure 5 also shows results for the pairs and . In the case of a single 1.5 mass star, the radiative CHZ and GHZ extend from 1.88 to 2.49 au, and from 1.65 to 3.11 au, respectively, whereas the radiative EHZ extends up to 4.61 au.
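The truncation of an S-type radiative zone by the upper stability limit, as in the examples above, reduces to an interval intersection; the limit itself can be estimated from a fitting formula of the form of eq. (46). The sketch below uses the widely cited Holman & Wiegert (1999) coefficients as an assumption (they are not copied from this paper, whose tabulated limits may follow a different normalization of the binary separation):

```python
def critical_ratio_s(mu, e=0.0):
    """Critical semimajor-axis ratio a_c / a_b for S-type orbits (upper
    stability limit); assumed Holman & Wiegert (1999) fit, with
    mu = m2 / (m1 + m2) and e the binary eccentricity."""
    return (0.464 - 0.380 * mu - 0.631 * e + 0.586 * mu * e
            + 0.150 * e**2 - 0.198 * mu * e**2)

def s_type_habitable_range(zone_in, zone_out, a_crit):
    """Portion of an S-type radiative zone [zone_in, zone_out] (au,
    measured from the host star) that also lies inside the upper
    orbital stability limit a_crit; None if nothing survives."""
    hi = min(zone_out, a_crit)
    return (zone_in, hi) if zone_in < hi else None
```

With the values quoted above, the radiative GHZ of the 1.0 + 1.0 pair at 10 au (0.932–1.853 au) is cut at the 1.37 au stability limit, while a zone whose inner edge exceeds the stability limit is lost entirely. For e = 0 and mu = 0.5, the assumed S-type fit gives a ratio of 0.274.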
In this type of system, a secondary star of 1.0 placed at a separation distance of 10 au again modifies the extents of the radiative CHZ and GHZ, which now extend from 1.89 to 2.49 au and from 1.66 to 3.14 au, respectively; however, the planetary orbital stability limit now occurs at 1.56 au. Therefore, the entire domains of the radiative CHZ, GHZ, and EHZ of the primary are unavailable as circumstellar habitable regions. If the secondary star is placed at a separation distance of 20 au, the radiative CHZ, GHZ, and EHZ of the primary star are again similar to those of a 1.5 mass star. However, the orbital stability limit is now found at a distance of 3.12 au from the primary; therefore, the entire supplement of the radiative EHZ, given by the bracket , is now considered habitable. In summary, for potentially habitable S-type binaries, owing to the implied requirement of relatively large separations of the stellar components, the effect of the stellar secondary on the extents of the RHZs is often minor, i.e., about a few percent or less, with the biggest impact occurring in F-type systems. For most systems, the secondary's main influence on circumstellar habitability thus consists of limiting planetary orbital stability rather than offering significant augmentations of the RHZs, a feature most pronounced in close binaries. Various sets of models have been pursued to examine P-type habitability (see Fig. 6). As examples we considered systems with masses of and and ; additionally, we also focused on models of and (see Tables 5 to 8 for details). The separation distances were chosen as 0.5, 1.0, and 2.0 au, respectively. Our approach again consists of two steps. First, we explore the existence and extent of the radiative CHZs, GHZs, and EHZs. Subsequently, we consider the additional constraint of planetary orbital stability, which in the case of P-type orbits constitutes a lower limit (see Sect.
4). Our results can be summarized as follows. For systems with masses of , the following behavior is found. For separation distances of 0.5 au, the inner limit (i.e., RHL; see Sect. 3.4) of the radiative CHZ varies between 1.46 and 1.54 au as a function of the polar angle, with 1.54 au to be considered the acceptable inner limit; see eq. . Furthermore, the outer limit of the radiative CHZ varies between 2.10 and 2.16 au, with 2.10 au as the acceptable outer limit; see eq. (39b). In consideration of the orbital stability limit at 0.92 au (see eq. 45), constituting an inner limit of orbital stability, the entire extent of the radiative CHZ is available as a circumbinary habitable region. The acceptable inner limit of the radiative GHZ is given as 1.38 au, whereas the acceptable outer limit occurs at 2.58 au; hence, the entire radiative GHZ is again identified as habitable. For separation distances of 1.0 au, the orbital stability limit is given at 1.83 au, which falls inside the domain of the radiative CHZ ranging from 1.69 to 2.06 au; therefore, only about half of the radiative CHZ is available for circumbinary habitability, whereas the other half is not. Since the radiative CHZ is fully embedded in the radiative GHZ, only a fraction of the radiative GHZ offers circumbinary habitability. However, the full extent of the supplementary radiative EHZ, given by the bracket , with an acceptable outer limit of 3.83 au, offers habitability. We also considered models with binary separations of 2.0 au. In this case, the orbital stability limit is found at 3.66 au. Therefore, both the radiative CHZ and GHZ are unavailable for providing habitability; the latter has an outer limit that varies between 2.39 and 3.05 au, with 2.39 au as the acceptable limit. The outer limit of the radiative EHZ varies between 3.60 and 4.09 au, with 3.60 au to be ruled acceptable as the conservatively selected (i.e.
, inner) limit of the radiative EHZ. Hence, the entire radiative EHZ is also not considered available for providing circumbinary habitability. Most significantly, we also pursued case studies for systems with unequal distributions of mass, and by implication unequal distributions of luminosity, as, for example, the system and . According to the mass-luminosity relationship for main-sequence stars, it is found that a 1.5 star possesses a luminosity about 3.5 times higher than a 1.0 star; a similar factor of difference exists for the recast stellar luminosity (see Tables 4 and 5). Thus, the combined luminosity of the system is considerably higher than the combined luminosity of the system , as expected. On the other hand, following the work by and , an unequal distribution of stellar mass, i.e., a smaller value of (see Sect. 4), entails a smaller orbital stability limit. Since it constitutes a lower limit, i.e., positioned more closely to the stellar system, it offers larger "windows of opportunity" for planets in the RHZs (if existing) to be orbitally stable. Results for separation distances of 0.5, 1.0, and 2.0 au are given in Fig. 6. For a binary separation of 0.5 au, it is found that both the radiative CHZ and GHZ exist, and habitability in these domains is fully permitted according to the planetary orbital stability constraint, although the width of the CHZ is relatively small. The CHZ extends from 2.14 to 2.28 au, whereas the GHZ extends from 1.91 to 2.90 au; the orbital stability limit is given at 0.66 au. In this type of system, there are extreme variations of the inner and outer limits of both the radiative CHZ and GHZ.
For example, the inner RHL for the CHZ varies between 1.65 and 2.14 au, whereas its outer RHL varies between 2.28 and 2.76 au as a function of the polar angle. There is also a considerably large domain of the supplementary portion of the radiative EHZ, which has an outer limit that varies between 4.40 and 4.89 au. Detailed depictions of the variations of the inner and outer limits of the RHZs for the various systems are given in Fig. . This figure indicates relatively small bars of variation for equal-mass systems such as with small separation distances as, e.g., au. However, large bars of variation are obtained for non-equal-mass binaries or for equal-mass binaries with large separation distances as, e.g., au. In systems with a binary separation of 1.0 au, the radiative CHZ is nullified; note that the orbital stability limit in this system is given at 1.32 au. The reason for the disallowance of the CHZ is that the RHL for , which is at 2.38 au, is located inside of the RHL for , given as 2.05 au. The same criterion (see eq. 40) also leads to a relatively small width of the radiative GHZ, which extends between 2.16 and 2.66 au. At a binary separation of 2.0 au, the situation is even more drastic, as both the radiative CHZ and GHZ are disallowed. The only type of circumbinary habitable region remaining is that provided by the relatively large supplementary portion of the radiative EHZ given by the bracket . In this zone, habitable planets are expected to be possible, as their existence would be consistent with the planetary orbital stability constraint. Next we explore the existence of P-type RHZs, both for equal-mass and non-equal-mass binaries, in a more systematic manner by means of _numerical experiments_.
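One way to organize such numerical experiments — locating the separation distance at which an RHZ expires — is a bisection on a monotone existence criterion. A sketch with a stand-in predicate follows; a real run would evaluate relation (40) or eq. (45) at each trial separation rather than the hypothetical lambda used in the usage note:

```python
def expiration_distance(rhz_exists, a_lo, a_hi, tol=1e-6):
    """Largest binary separation (au) for which rhz_exists(a) still
    holds, found by bisection; assumes the RHZ exists at a_lo and has
    expired at a_hi, with a single crossover in between."""
    assert rhz_exists(a_lo) and not rhz_exists(a_hi)
    while a_hi - a_lo > tol:
        mid = 0.5 * (a_lo + a_hi)
        if rhz_exists(mid):
            a_lo = mid
        else:
            a_hi = mid
    return a_lo
```

For instance, with a hypothetical criterion that fails beyond 2.57 au, `expiration_distance(lambda a: a < 2.57, 0.1, 10.0)` converges to approximately 2.57 au.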
Specifically, we pursue sets of model calculations with the binary separation distance considered as an independent variable; see Table 9 for results. The stellar masses are altered between and in increments of (see Table 5). Results are given for the pairings (CHZ), (GHZ), and (EHZ). Note that for equal-mass binaries, it is sufficient to solve eq. (45), whereas for general binary systems a more thorough assessment is needed to satisfy relation (40). This approach allows us to explore the maximum binary separation distances, which are upper limits for permitting RHZs for each case (i.e., combination of binary masses and choice of CHZ, GHZ, or EHZ). Generally, it is found that for any binary system, the greatest permissible binary separation distance is attained for the EHZ, and furthermore that the value as attained is greater for the GHZ than for the CHZ; these findings are as expected. For example, for the system , the expiration distance for the radiative EHZ is given as 4.25 au, whereas for the radiative GHZ and CHZ, the distances are given as 2.57 and 1.64 au, respectively (see Table 9). As another example, for the system and , the expiration distances for the radiative EHZ, GHZ, and CHZ are given as 3.80, 1.93, and 0.96 au, respectively. It is also intriguing to compare results for stellar pairs as, e.g., to stellar pairs such as and . For the system of , it is found that the radiative CHZ and GHZ are nullified, as defined by the limit of validity of eq. (45), at binary separation distances of 1.64 and 2.57 au, respectively (see Table 9). At those binary separations, the distances of the vanishing RHZ-CHZ and RHZ-GHZ (as measured from the geometrical center of the system; see Fig. 1) are given as 1.95 and 2.25 au, respectively.
In comparison, the limits of planetary orbital stability (to be interpreted as lower limits) are identified as 3.00 and 4.71 au, respectively. Thus, we conclude that for equal-mass binary systems such as , habitability for widely spaced binaries is lost due to the lack of orbital stability already at binary separations where the circumbinary CHZ-RHZs and GHZ-RHZs are still in place. The same type of study has been pursued for systems with highly unequal intrabinary distributions of masses and, by implication, stellar luminosities, as, e.g., and . In this case it is found that the radiative CHZ and GHZ vanish at binary separation distances of 0.65 and 1.55 au, respectively. Furthermore, the distances of the vanishing radiative CHZ and GHZ (as measured from the system center; see Fig. 1) are given as 2.21 and 2.43 au, respectively (see Table 9). The respective limits of planetary orbital stability are identified as 0.85 and 2.04 au. Thus, for this type of system it is found that habitability is lost due to the vanishing RHZs, even though circumstellar habitability would still be permitted according to the planetary orbital stability criterion. The fact that circumbinary habitability is lost already for systems of relatively small binary separations is a consequence of the extreme radiative imbalance caused by the highly unequal distribution of stellar luminosities, which determine the circumbinary RHLs. Radiative imbalance within binary systems may cause the RHL for to be partially or completely located inside of the RHL for ; see Sect. . In fact, when the pairs and and are compared to one another, it is found that although the unequal-mass binary system has almost twice the combined stellar luminosity of the equal-mass binary system (i.e.
, 3.85 versus 2.0), it still possesses much narrower CHZ, GHZ, and EHZ RHZs. In fact, it is found that the condition expressed as eq. (40) is most readily met in cases of equal-mass binary systems of relatively small separation distances, and mostly violated in systems of relatively large separation distances and/or unequal distributions of masses and, by implication, luminosities. Various examples have been depicted in Fig. 7; see the discussion in Sect. 5.2.1. In summary, although an unequal distribution of stellar masses within binary systems is identified as advantageous for facilitating planetary orbital stability, considering that lower stability limits for P-type orbits occur for smaller mass ratios (see Sect. 4), the situation for the existence of the RHZs is much less ideal, even for systems where the stellar primary is highly luminous, owing to the behavior of the RHLs. In this regard, the radiative CHZ is in most jeopardy, as constitutes the smallest bracket among the various kinds of HZs (see Table 2). More fortunate scenarios are expected to occur for the radiative GHZ and EHZ, with the brackets given as and , respectively; they are characterized by considerably larger widths, especially in the case of equal-mass systems of stars with relatively high luminosities. Another aspect of this study is to provide an appropriate classification of habitability applicable to general binary systems. Previously, introduced the terminology of S-type and P-type orbits for system planets, which is now widely used by the orbital stability, planetary, and astrobiology science communities. Evidently, besides the assessment of orbital stability behaviors, these terms are also appropriate for classifying binary system RHZs, if existing. However, following previous investigations (e.g., * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ?
?* ), as well as the results of the present work, the spatial domain of S-type and P-type habitability depicted by the RHZs is often adversely affected, and in some cases even nullified, by the requirement that system planets must be orbitally stable. Thus, if the available extent of the S-type and P-type RHZs for the manifestation of habitability is truncated owing to the additional constraint of planetary orbital stability, these zones shall be referred to as ST-type and PT-type, respectively, in the following. Detailed results are given in Table 10, which provides an extensive summary of P, PT, ST, and S-type habitability for both equal-mass and non-equal-mass binary systems. The stellar masses are varied between and in increments of , amounting to a total of 15 combinations. Table 10 features the results for the pairings (CHZ), (GHZ), and (EHZ). In principle, it is found that, with the secondary taken as fixed, the higher the mass and, by implication, the luminosity of the stellar primary, the larger the values obtained for P, PT, ST, and S-type habitability. Additionally, larger values for the limits of P, PT, ST, and S-type habitability are obtained regarding the GHZ relative to the CHZ, as expected. The largest values are obtained for PT and S-type habitability for the EHZ; in this regard, there is no change for P and ST-type habitability relative to the GHZ, since both types of HZs are based on the same inner bracket value of (see above). The results of Table 10 are in line with the previously discussed findings about highly unequal intrabinary distributions of masses and, by implication, stellar luminosities, as, e.g., and or and , compared to the case of .
For systems of highly unequal mass distributions, the domains of P-type and PT-type habitability are typically relatively small, as the RHL for crosses the RHL for in relatively close proximity to the primary, thus allowing only small distance ranges to exhibit P/PT-type habitability. It is also found that in seven cases for the CHZ, as well as two cases for the GHZ, the RHZs expire prior to the truncation of habitability due to the planetary orbital stability requirement. In those cases, only P-type habitability exists; no PT-type habitability is found, as the orbital stability constraint bears no relevance. Figures 8 and 9 show various combinations of equal-mass and nonequal-mass binary systems; they all show numerous similarities, though the spatial scales are noticeably different, as they are defined through the stellar luminosities. If equal-mass binary systems are considered, taking and as examples, the extents of both P and PT-type habitability increase with increasing stellar mass or luminosity. The distances for P and PT habitability are found to almost coincide, indicating that the orbital stability constraint affects the inner and outer limits of P-type habitability in about the same manner. Furthermore, the inner and outer limits of both S and ST habitability are shifted to larger distances from each stellar component for stars of higher luminosity, as expected. Moreover, for stars of higher luminosity, there is a larger spatial domain where S-type habitability is truncated due to the additional constraint of planetary orbital stability. For equal-mass systems of , ST and S-type habitability is identified at distances of 6.85 and 13.46 au, whereas for , ST and S-type habitability is identified at 1.38 and 3.00 au. Figure 9 depicts two selected cases of nonequal-mass binary systems.
In both cases the stellar primary is chosen as 1.0 , whereas the stellar secondary is chosen as 0.75 and 0.5 , respectively; the corresponding stellar luminosities of the secondaries are 0.357 and 0.045 , respectively (see Table 5). A reduced luminosity of the secondary binary component adversely affects the extent of the RHZ, as expected. Interestingly, a reduction of mass for the stellar secondary has a nontrivial impact on the orbital stability domains, which considerably depend on the mass ratio (see eqs. 46 and 47). If is reduced, the permissible stability domain for P-type orbits is increased, whereas the permissible stability domain for S-type orbits is decreased. Thus, the assessment of S, P, ST, and PT habitability for nonequal-mass binaries requires a detailed computational analysis. In the example of Fig. 9, in regard to , S, P, ST, and PT habitability occurs at distances of 0.82, 1.16, 6.18, and 12.19 au, respectively, and for , S, P, ST, and PT habitability occurs at distances of 0.94, 1.01, 5.50, and 10.86 au, respectively. The respective differences are notable, but not drastic; more pronounced differences occur for systems of more luminous stars (see Table 10). Note that both Figs. 8 and 9 refer to habitability assessments pertaining to the GHZ (see Table 2). They are given to showcase, by example, the structure, extent, and location of the S-type and P-type RHZs as well as the relevance of the orbital stability limits for both S-type and P-type habitability, thus allowing us to uniquely identify the spatial domains of S, P, ST, and PT-type habitability for each system. Computationally, the occurrence of P, PT, ST, and S-type habitability can be identified as follows.
For sufficiently small binary separations, it is found that both the inner and outer limits of the P-type RHZ are located beyond the P-type orbital stability limit (see eq. 47), which constitutes a lower limit of planetary orbital stability (see Sect. 4). If the distance of binary separation is increased, the inner and outer limits of the P-type RHZ decrease, whereas the P-type orbital stability limit increases. Thus, starting at a certain value of , only a fraction of the width of the P-type RHZ will be available for providing habitability; in this case, PT-type habitability is attained. If the binary separation is further increased, the entire width of the P-type RHZ will be unavailable for providing habitability, because habitability would be incompatible with the orbital stability constraint. Eventually, the P-type RHZ expires; see also the information provided in Table 9. For mid-sized values of the binary separation distance, the S-type RHZ is found to exist, but it is unable to provide habitability because of the S-type orbital stability limit (see eq. 46), which constitutes an upper limit of planetary orbital stability (see Sect. 4). The S-type RHZs continue to exist further out; note that the inner and outer limits of the S-type RHZs essentially continue to run parallel as a function of the binary separation distance for most systems. If the binary separation distance is further increased, again as part of our numerical experiment, the S-type orbital stability limit will increase and will cross the inner limit of the S-type RHZ; in this case, ST-type habitability is encountered, as some, but not all, of the width of the S-type RHZ is available for providing habitability. Eventually, for sufficiently large binary separations, the S-type orbital stability limit also crosses the outer limit of the S-type RHZ.
in this case, the full width of the s-type rhz is available for facilitating habitability, consistent with the definition of s-type habitability. another application is displayed in fig. 10. it shows, for a given spectral type of equal-star binaries, the stellar separation distances for which chzs, ghzs, and ehzs are able to exist. the chzs, ghzs, and ehzs can be either s or st-type, on one hand, or p or pt-type, on the other hand, to qualify for depiction. the results are given as a function of stellar spectral type, for stars from spectral type f0 to m0. the figure shows that p/pt-type habitable regions are able to exist for a relatively large range of separation distances in the case of relatively luminous stars (i.e., spectral type f), but only for a relatively small range of separation distances for less luminous stars (i.e., spectral types k and m). regarding s/st-type habitable regions, the situation is reversed. figure 10 also indicates a notable domain of binary separations where no habitable regions are found owing to the lack of rhzs, the lack of planetary orbital stability, or both. moreover, no domain of binary separation distances is identified where s/st-type and p/pt-type habitable regions overlap. in this study we present a new method for the comprehensive assessment of s-type and p-type habitability in stellar binary systems. p-type orbits occur when the planet orbits both binary components, whereas in the case of s-type orbits the planet orbits only one of the binary components, with the second component considered a perturbator. an important characteristic of the new method is that it combines the orbital stability constraint for a system planet with the necessity that a habitable region given by the stellar radiative energy fluxes ("radiative habitable zone") must exist. the requirement to combine these two properties has also been recognized in previous studies. another element of the present study is to introduce a habitability classification regarding stellar binary systems, consisting of habitability types s, p, st, and pt. this type of classification also considers whether or not s-type and p-type radiative habitable zones are reduced in size due to the additional constraint of planetary orbital stability. in summary, five different cases were identified, which are: s-type and p-type habitability provided by the full extent of the rhz; habitability where the rhz is truncated by the additional constraint of planetary orbital stability (labelled as st and pt-type, respectively); and cases of no habitability at all. this classification scheme can be applied to both equal-mass and nonequal-mass binary systems, as well as to systems with binaries in elliptical orbits, which will be the focus of the forthcoming paper ii of this series. as part of the current study, a significant array of results is given for a notable range of main-sequence stars, which are of both observational and theoretical interest. a key aspect of the proposed method is the introduction of a combined algebraic formalism for the assessment of both s-type and p-type habitability; in particular, mathematical criteria are presented that allow one to determine for which systems s-type and p-type rhzs are realized. in this regard, a priori choices about the presence of s-type and p-type rhzs are neither necessary nor possible, as the existence of s-type as well as p-type rhzs is established through well-defined mathematical conditions pertaining to the underlying fourth-order algebraic equation. the coefficients of the polynomial are given by the binary separation distance, the solar-system-based parameter for the limit of habitability, and the modified values for the luminosities of the stellar binary components, referred to as recast stellar luminosities.
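the step-by-step identification of p, pt, st, and s-type habitability described above lends itself to a simple decision rule. the following sketch (with hypothetical input values; in practice the limits would come from the rhz formalism and from the orbital stability criteria of eqs. 46 and 47) classifies a configuration from the rhz limits and the relevant stability limit:

```python
def classify_habitability(rhz_in, rhz_out, stab_limit, orbit_type):
    """Classify habitability as 'P', 'PT', 'ST', 'S', or 'none'.

    rhz_in, rhz_out: inner/outer limit of the radiative habitable zone (au).
    stab_limit: orbital stability limit (au); a LOWER limit of stable orbits
        for p-type (circumbinary) planets and an UPPER limit for s-type
        (circumstellar) planets.
    orbit_type: 'P' or 'S'.
    """
    if rhz_out <= rhz_in:
        return "none"          # rhz nullified (e.g., by radiative imbalance)
    if orbit_type == "P":
        if stab_limit <= rhz_in:
            return "P"         # full width of the rhz is orbitally stable
        if stab_limit < rhz_out:
            return "PT"        # rhz truncated from below by the stability limit
        return "none"          # entire rhz lies in the unstable region
    else:                      # s-type: stability limit is an upper limit
        if stab_limit >= rhz_out:
            return "S"
        if stab_limit > rhz_in:
            return "ST"        # rhz truncated from above
        return "none"

# hypothetical example: an rhz extending from 0.95 to 1.67 au
print(classify_habitability(0.95, 1.67, 0.80, "P"))  # 'P'
print(classify_habitability(0.95, 1.67, 1.20, "P"))  # 'PT'
print(classify_habitability(0.95, 1.67, 1.20, "S"))  # 'ST'
```

sweeping the stability limit against fixed rhz limits reproduces the p, pt, none, st, s progression with increasing binary separation described in the text.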
regarding the binary-system habitable zone, we consider conservative, general, and extended zones of habitability, noting that their inner and outer limits are informed by previous solar system investigations. in our segment of applications, we examined the existence of habitable s-type orbits for selected examples. we found that regarding the rhzs, owing to the typically relatively large separation of the stellar components, the effect of the stellar secondary on the extents of the rhzs is usually very minor. the secondary's main influence on circumstellar habitability consists in imposing restrictions regarding planetary orbital stability, implemented as an upper stability limit around each stellar component, which often truncates or nullifies s-type planetary habitability. in the framework of our study, we specifically considered the radiative ehz, which is the most outwardly extended (i.e., up to 2.4 au in the case of the sun). it was found that this kind of zone is most affected by the limitation of planetary orbital stability, as it is located closest to the secondary stellar component. furthermore, we also examined the existence of habitable p-type orbits. in this case, relatively complicated scenarios emerge. in general, it was found that the best prospects for circumbinary habitability emerge for (1) systems with stellar components of relatively high luminosities (no surprise here!), (2) systems where the stellar luminosities are relatively similar (for main-sequence stars, as implied by their stellar masses), and (3) systems of relatively small binary separations. if conditions (2) or (3) are not met, it may occur that the outer rhl is located inside of the inner rhl, thus nullifying the rhz irrespective of planetary orbital stability considerations. on the other hand, an unequal intrabinary distribution of masses entails a lower limit of planetary orbital stability ( i.e.
, positioned closer to the binary system), thus implying an enhanced opportunity for circumbinary habitability. however, this aspect is of lesser significance for most systems compared to the restrictions for the rhzs due to the imbalance given by the stellar luminosities. various applications in this study concern stars of masses between 0.75 and 1.5 (in solar units). this approach is motivated by the aim to unequivocally demonstrate the effects of stellar binarity on the extent and structure of circumstellar habitability, which are most pronounced for massive, i.e., highly luminous stars. nonetheless, most stars in binaries are expected to be low-mass stars, i.e., stars of spectral types k and m, owing to the skewness of the galactic initial mass function. for example, we compared pairs of systems given by (1.0, 1.0) and (1.5, 0.5). obviously, the overall luminosity is by far greatest in the (1.5, 0.5) system, following the mass-luminosity relationship. however, this system is found to be highly unfavorable for the facilitation of circumbinary habitability. particularly, it is found that the p-type ghz in the (1.0, 1.0) system extends to 0.91 au, whereas it extends only to 0.65 au in the (1.5, 0.5) system. furthermore, smaller spatial extents are identified for p-type chzs, as this type of hz is in highest jeopardy owing to the relatively small distance bracket compared to the bracket for ghzs (see table 2). in fact, a considerable number of systems do not offer chzs at all, which again is a consequence of the radiative imbalance in those systems.
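the luminosity imbalance behind this comparison can be illustrated with a crude mass-luminosity scaling; the exponent 3.5 used below is a textbook approximation for solar-type main-sequence stars, not the article's tabulated values, which deviate appreciably for low-mass stars:

```python
def lum(mass, alpha=3.5):
    """Approximate main-sequence luminosity in solar units, L ~ M**alpha.

    alpha = 3.5 is an assumed, illustrative textbook exponent.
    """
    return mass ** alpha

equal = (1.0, 1.0)
unequal = (1.5, 0.5)

for m1, m2 in (equal, unequal):
    l1, l2 = lum(m1), lum(m2)
    print(f"({m1}, {m2}): total L = {l1 + l2:.2f}, ratio L1/L2 = {l1 / l2:.1f}")
```

despite its larger total luminosity, the (1.5, 0.5) system has a flux ratio between the components of more than an order of magnitude, which is what shifts the outer circumbinary rhl inward and can nullify the rhz, as discussed above.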
also, the nullification of chzs in binary systems is most likely to occur in systems of relatively large separation distance. in contrast, the best opportunities for facilitating circumbinary habitability are given in the context of ehzs, as expected. future work will deal with a significant augmentation of our method to other systems, including systems with binary components in elliptical orbits (see paper ii). this will allow us to compare applications of our method, including results for individual systems, to other findings in the literature. we also expect our method to be applicable to general binary systems with main-sequence stars as well as to systems containing evolved stars; this latter effort is motivated by observational evidence and supporting theoretical efforts indicating that planets are also able to exist around stars that have left the main sequence. particularly, it is highly desirable to augment our method to systems of higher order, as motivated by the steady progress in theory as well as ongoing and future observational discoveries of exosolar planetary systems. this work has been supported in part by the seti institute. the author acknowledges comments by b. quarles and z. e. musielak as well as assistance with computer graphics by s. sato, s. satyal, and m. sosebee. the paper also benefited from detailed comments by an anonymous referee. this study made use of the software applications fortran, mathematica, and matlab. the author anticipates the development of a black-box code, called binhab, to be hosted at the university of texas at arlington, which will allow the assessment of habitability in binary systems based on the developed method.
[tables 1-10 were garbled in extraction and are only partially recoverable. table 2 (solar-system-based limits of habitability):

1  0.84 au  ghz/ehz  runaway greenhouse effect
2  0.95 au  chz      start of water loss
3  1.00 au  ...      earth-equivalent position
4  1.37 au  chz      first co2 condensation
5  1.67 au  ghz      maximum greenhouse effect, no clouds
6  2.40 au  ehz      maximum greenhouse effect, 100% clouds

table 5 (adopted main-sequence stars): masses of 1.50, 1.25, 1.00, 0.75, and 0.50 (solar units) correspond to spectral types f2 v, f7 v, g2 v, k2 v, and m0 v, effective temperatures of 6842, 6256, 5833, 5100, and 3857 k, and luminosities of 4.233, 2.173, 1.228, 0.357, and 0.045 (solar units), respectively. the remaining tables list stellar parameters and rhz limits as functions of spectral type (f0 to m0), as well as the s, p, st, and pt-type habitability limits for equal-mass and nonequal-mass binary systems at various separation distances.]

a comprehensive approach is provided to the study of both s-type and p-type habitability in stellar binary systems, which in principle can also be expanded to systems of higher order. p-type orbits occur when the planet orbits both binary components, whereas in the case of s-type orbits the planet orbits only one of the binary components, with the second component considered a perturbator. the selected approach encapsulates a variety of different aspects, which include: (1) the consideration of a joint constraint: both orbital stability and a habitable region for a putative system planet, given through the stellar radiative energy fluxes ("radiative habitable zone"; rhz), need to be met. (2) the treatment of conservative, general, and extended zones of habitability for the various systems, as defined for the solar system and beyond.
(3) the provision of a combined formalism for the assessment of both s-type and p-type habitability; in particular, mathematical criteria are presented for determining in which kind of system s-type and p-type habitability is realized. (4) applications of the attained theoretical approach to standard (theoretical) main-sequence stars. in principle, five different cases of habitability are identified, which are: s-type and p-type habitability provided by the full extent of the rhzs; habitability where the rhzs are truncated by the additional constraint of planetary orbital stability (referred to as st and pt-type, respectively); and cases of no habitability at all. regarding the treatment of planetary orbital stability, we utilize the formulae of holman & wiegert (1999) [aj 117, 621], as also used in previous studies. in this work we focus on binary systems in circular orbits. future applications will also consider binary systems in elliptical orbits and provide thorough comparisons to other methods and results given in the literature.
despite the terrifying fact that numerous operational fission or even thermonuclear bombs exist on our planet, there is great interest in the basic principles and physics underlying the concept of nuclear weapons. during the last decades, an increasing number of details concerning the design of the gadget, the first atomic bomb ignited in the trinity test, have been revealed. in the present work, the first man-made nuclear explosion is modeled in a simplified but instructive manner, which nevertheless leads to realistic and even rather accurate quantitative results. the calculations are based on a point model in the sense of nuclear reactor point models or neutron point kinetics, where the whole structure of the reactor core is averaged out in an effective way, removing the spatial structure of the object under study, which therefore becomes a structureless point, but still retaining the basic physical features in the time domain. below, we will model an exploding plutonium core, including its surrounding matter, as a sphere (or ball) with a time-dependent radius, where the expansion is driven by (a gradient of) the radiation pressure produced by the energy released by the nuclear chain reaction (see also fig. [fig_model]). on the one hand, the time-dependent density of fissile atoms and other crucial quantities are assumed to be spatially constant inside the sphere, as an approximation and one defining feature of the present point model. on the other hand, we will assume that the explosion builds up a fireball, a matter shell, or blast shell, confined by a wall of fire in the close vicinity of the surface of the sphere, containing all the material that was originally located inside the sphere, finally compressed to a thin layer. the interior of the fireball is basically matter-free but filled with black-body radiation of a well-defined temperature, i.e.
with a homogeneous photon gas reaching a very high energy density and a radiation pressure of several hundred gigabars during the nuclear explosion. this picture is certainly justified within the first microseconds after ignition of the chain reaction by a neutron source, where the explosion generates very high temperatures, but one must keep in mind that applying concepts from equilibrium thermodynamics clearly represents an approximation. since the mean free path of the photons inside the dense plutonium core is much smaller than the core radius, the major part of the energy released by fission will remain in the core during this phase. the radiation temperature must be clearly distinguished from the lower temperature distribution inside the expanding matter shell, whose detailed structure is of minor interest in the present exposition. however, the hot matter shell emits hazardous intense x-rays, ultraviolet light, visible light, infrared light, and thermal radiation to the environment. due to its symmetries, the point-kinetic model shares some similarities with cosmological models of the early universe.
[figure caption (partial): (a) ... including all the matter originally located inside the sphere with time-dependent radius. (b) the expansion of the core is driven by the radiation pressure inside the matter-free sphere, which converts the energy released from fission into the matter's kinetic energy. (c) the dynamics of the chain reaction is inspired by ordinary neutron kinetics and transport theory inside a homogeneous plutonium sphere.]

the time-dependent diffusion equation for the prompt neutron flux density $\phi$ inside the sphere containing the fissionable material reads $$\frac{1}{v}\frac{\partial \phi}{\partial t} = \nabla\cdot(D\,\nabla\phi) + (\bar{\nu}\,\Sigma_f - \Sigma_a)\,\phi \, , \qquad {\rm ([diffusion\_model])}$$ where the neutron density $n$ is related to the neutron flux density via $\phi = n\,v$ by the average fission neutron velocity $v$. the fast neutron spectrum is not known with high accuracy, and the neutrons inside the sphere containing the fissionable material undergo elastic but also inelastic scattering. below, we will work with a generally accepted value for the average neutron velocity. a diffusion model according to eq. [diffusion_model] is justified to some extent, since the velocity of fission neutrons is generally much larger than the velocity of the fissionable material. according to diffusion theory, a gradient of the neutron flux density induces a neutron current density according to fick's law of diffusion, $$\vec{j} = -D\,\nabla\phi \, ,$$ where $D$ is the diffusion constant for fast neutrons inside the material under study, i.e.
a parameter which basically describes the local interactions of the neutrons, governing how easily the neutrons can move. the divergence of the current density equals the local neutron leakage rate per volume due to diffusion, $-\nabla\cdot\vec{j} = D\,\nabla^2\phi$. as already mentioned, we presume homogeneity in the sense that the diffusion constant and the total, fission, absorption, capture, and scattering macroscopic cross sections ($\Sigma_t$, $\Sigma_f$, $\Sigma_a$, $\Sigma_c$, $\Sigma_s$) are assumed to be spatially constant inside the sphere where the diffusion equation will be investigated. a macroscopic cross section is the collective cross section of all atoms per volume. this collective cross section density gets permeated by the neutron flux density; accordingly, the corresponding reaction rate per volume is given by the product $\Sigma\,\phi$. given the average number $\bar{\nu}$ of (prompt) neutrons released per fission, the local neutron production rate per volume is $\bar{\nu}\,\Sigma_f\,\phi$, and since the material inside the core absorbs neutrons at a rate per volume $\Sigma_a\,\phi$, one finally arrives at the spatially homogenized version of the balance equation [diffusion_model], $$\frac{1}{v}\frac{\partial \phi}{\partial t} = D\,\nabla^2\phi + (\bar{\nu}\,\Sigma_f - \Sigma_a)\,\phi \, .$$ of course, quantities like the density of atoms and the macroscopic cross sections directly depend on the time-dependent compression factor $\eta(t)$ or the corresponding density of the sphere, $\rho(t) = \eta(t)\,\rho_0$, where $r_0$ is the radius of the uncompressed plutonium core. the exponential ansatz for the flux with the inverse bomb period (rossi alpha) $\alpha$ leads to eq. [separation]. in the spherically symmetric case, the laplacian becomes $\nabla^2 = \frac{1}{r^2}\frac{\partial}{\partial r}\bigl(r^2\frac{\partial}{\partial r}\bigr)$, and making the quasi-static ansatz, where one neglects the last term in eq. [separation], one finds the well-known solution $$\phi \sim \frac{\sin(B_g\,r)}{r} \, , \qquad {\rm ([nf\_ansatz])}$$ which is an eigenstate of the laplacian, $\nabla^2\phi = -B_g^2\,\phi$, where $B_g$ is the so-called geometric buckling to be specified below. from eq. [separation] then follows the inverse bomb period $$\alpha = v\,(\bar{\nu}\,\Sigma_f - \Sigma_a - D\,B_g^2) \, , \qquad {\rm ([ibp])}$$ and the so-called material buckling is given in accordance with the literature by $$B_m^2 = \frac{\bar{\nu}\,\Sigma_f - \Sigma_a}{D} = \frac{k_\infty - 1}{L^2} \, ,$$ with the diffusion length $L = \sqrt{D/\Sigma_a}$. space and time arguments have been omitted for the sake of notational simplicity.
above, the infinite multiplication factor $k_\infty = \bar{\nu}\,\Sigma_f/\Sigma_a$ for a core without leakage has also been introduced. now two comments are in order. first, the separation ansatz in eq. [sep_ansatz] is certainly not unique. a clean separation a la $\phi(\vec{r},t) = f(t)\,\phi_0(\vec{r})$ is, strictly speaking, impossible, since the diffusion equation under study is considered on a time-dependent domain, and one should note that $B_g$ and $\alpha$ are time-dependent quantities as well, whose evolution in time will be governed by the dynamics induced by the chain reaction, to be discussed further below. second, we need a strategy to calculate the geometric buckling appearing in the approximate ansatz [nf_ansatz]. from diffusion theory one learns, for the time-independent case, that the spherically symmetric neutron (flux) density inside a homogeneous ball with radius $R$ surrounded by a vacuum is maximal in the center and minimal at the boundary, where the neutrons leak out. if the flux vanished directly at the boundary, the geometric buckling would be given by $B_g = \pi/R$, so that the flux at the surface vanishes. however, according to a common approximation inspired by diffusion theory, the neutron (flux) density vanishes on an imaginary sphere outside the core with a so-called extrapolated radius, to be calculated below. with this boundary condition applied to the time-dependent case, the geometric buckling becomes $B_g(t) = \pi/\hat{r}(t)$, with the extrapolated radius $\hat{r}(t)$, and can thus be computed at every instant. integrating over the ball, we obtain from eq. [separation], disregarding the last term on the right, for the neutron number $N(t)$ the growth law $$\dot{N} = \alpha\,N \, , \qquad {\rm ([growth])}$$ and consequently exponential growth for constant $\alpha$. the fact that the last term in eq. [separation] is negligible after integration represents an adiabatic approximation, which can be justified by the observation that the neutron dynamics is governed by a much smaller time scale than the expansion of the plutonium core. the expansion destroys the separability of the diffusion equation.
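a minimal sketch of the resulting quasi-static rossi alpha can be written down directly from eq. [ibp]; the one-group cross sections, $\bar{\nu}$, delta-phase density, core mass, and neutron speed used below are assumed round illustrative numbers, not the article's exact inputs:

```python
import math

BARN = 1e-24            # cm^2
N_A = 6.022e23          # 1/mol
A_PU, RHO0 = 239.0, 15.9          # g/mol; assumed delta-phase density, g/cm^3
SIG_F, SIG_C, SIG_S = 1.8, 0.05, 4.4   # barns, assumed fast-spectrum values
NU_BAR, V_N = 2.94, 2.0e9              # prompt neutrons per fission; cm/s

def rossi_alpha(mass_g, eta):
    """Quasi-static inverse bomb period alpha = v*(nu*Sig_f - Sig_a - D*Bg^2)."""
    n = eta * RHO0 * N_A / A_PU                      # atomic density, 1/cm^3
    r = (3.0 * mass_g / (4.0 * math.pi * eta * RHO0)) ** (1.0 / 3.0)
    sig_f = n * SIG_F * BARN
    sig_a = n * (SIG_F + SIG_C) * BARN
    sig_s = n * SIG_S * BARN
    sig_tr = n * (SIG_F + SIG_C + SIG_S) * BARN - (2.0 / (3.0 * A_PU)) * sig_s
    d = 1.0 / (3.0 * sig_tr)                         # diffusion constant, cm
    r_ex = r + 2.13 * d                              # extrapolated radius
    bg2 = (math.pi / r_ex) ** 2                      # geometric buckling
    return V_N * (NU_BAR * sig_f - sig_a - d * bg2)  # 1/s

# a 6.2 kg core: subcritical uncompressed, prompt supercritical at eta ~ 2.5
print(rossi_alpha(6200.0, 1.0))   # negative
print(rossi_alpha(6200.0, 2.5))   # positive, of order 1e8 per second
```

the sign change of alpha with compression is the point-kinetic analogue of assembling a supercritical configuration by implosion.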
after having defined how to calculate the quasi-static neutron (flux) distribution and the geometric buckling in the expanding core as a quasi-static object, eq. [diffeq_period] can be inferred as an approximation to the true neutron number dynamics. in the presence of a neutron source (due to spontaneous fission or an alpha-beryllium source), eq. [growth] can be equipped with a source term $S$, i.e., $\dot{N} = \alpha\,N + S$. using now the extrapolated radius inspired by neutron transport theory, $$\hat{r} = r + 0.71\,\lambda_{tr} \, , \qquad {\rm ([extrapol])}$$ where the extrapolated neutron flux density, eq. [nf_ansatz], is assumed to vanish, leads in the truly static case ($\alpha = 0$) to the condition $B_g = B_m$ for the critical extrapolated radius, which is related to the geometric buckling introduced above. for illustrative purposes, we readily calculate the critical mass for pure plutonium in the alpha phase. the neutron cross sections averaged over the fission spectrum, as well as the average number of fast neutrons released in a fission induced by fast fission neutrons (delayed neutrons included in the stationary case), are taken from the literature. from the density of plutonium in the alpha phase and the isotope mass, one calculates the density of plutonium atoms, $n = \rho\,N_A/A$, with the avogadro constant $N_A$. from diffusion theory follow the approximate expressions $D = \lambda_{tr}/3$ for the diffusion constant and $\lambda_{tr} = 1/(\Sigma_t - \bar{\mu}\,\Sigma_s)$ for the transport mean free path, with an approximate value $\bar{\mu} \approx 2/(3A)$ of the average cosine of the neutron scattering angle, where $A$ is the atomic mass number. one finally obtains the critical radius and the corresponding critical mass; an analogous calculation can be performed for pure $^{235}$u, and analogous calculations concerning the critical mass of uranium can also be found in the literature. interestingly, doubling the density of the material by compression reduces the critical radius and the critical mass considerably; for a bare sphere, the critical mass scales approximately as $1/\eta^2$ with the compression factor $\eta$. adding an extended neutron-reflecting tamper around the plutonium core also lowers the critical mass.
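the bare-sphere critical-mass estimate just outlined can be sketched numerically; the one-group fast-neutron data below are assumed illustrative values for alpha-phase plutonium (not necessarily those used in the article), so the result should only be read as an order-of-magnitude check:

```python
import math

BARN, N_A = 1e-24, 6.022e23
A, RHO = 239.0, 19.8                    # g/mol; alpha-phase density, g/cm^3
SIG_F, SIG_C, SIG_S = 1.8, 0.05, 4.4    # barns (assumed fast-spectrum values)
NU_BAR = 3.0                            # neutrons per fission (incl. delayed)

n = RHO * N_A / A                       # atomic density, 1/cm^3
sig_f = n * SIG_F * BARN                # macroscopic fission cross section
sig_a = n * (SIG_F + SIG_C) * BARN      # absorption = fission + capture
sig_s = n * SIG_S * BARN
sig_t = n * (SIG_F + SIG_C + SIG_S) * BARN
sig_tr = sig_t - (2.0 / (3.0 * A)) * sig_s   # transport cross section
d = 1.0 / (3.0 * sig_tr)                # diffusion constant, cm
lam_tr = 3.0 * d                        # transport mean free path

bm = math.sqrt((NU_BAR * sig_f - sig_a) / d)  # material buckling, 1/cm
r_ex = math.pi / bm                     # critical extrapolated radius
r_c = r_ex - 0.71 * lam_tr              # critical (physical) radius
m_c = (4.0 / 3.0) * math.pi * r_c ** 3 * RHO / 1000.0   # kg

print(f"critical radius ~ {r_c:.1f} cm, critical mass ~ {m_c:.1f} kg")
```

with these assumed inputs the diffusion estimate lands in the ten-kilogram range, and rescaling the density by a compression factor reproduces the approximate $1/\eta^2$ reduction of the bare-sphere critical mass mentioned above.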
in the case of such a tamper, the spherically symmetric neutron flux density in the tamper behaves like a combination of $e^{\pm\kappa r}/r$ terms, and the critical mass can be calculated from the critical condition that the stationary diffusion current density is continuous everywhere. we want to generalize the preliminary considerations above to a homogeneous sphere with a time-dependent radius $r(t)$ containing plutonium with a given total mass. it is straightforward to calculate relevant quantities like the macroscopic cross sections discussed above from convenient quantities like the number of atoms, the volume of the sphere, or the corresponding atomic density $n(t) = N_{\rm nuc}(t)/V(t)$, where $V(t) = \frac{4}{3}\pi r^3(t)$ is the momentary volume of the sphere. before the chain reaction starts, the mass and the number of nuclei are given by their initial values. from these follow the extrapolated radius and the corresponding material and geometric bucklings; in the case of the geometric buckling, the quasi-static approximation is such that $B_g(t) = \pi/\hat{r}(t)$. the inverse bomb period follows from eq. [ibp] and serves for the update of the neutron flux density inside the sphere. with respect to the inverse bomb period, it is useful to introduce the (time-dependent) prompt neutron generation time $\Lambda$, the effective neutron multiplication factor $k_{\rm eff}$, and the reactivity $\rho_{\rm reac} = (k_{\rm eff}-1)/k_{\rm eff}$, which are related to the inverse bomb period via $\alpha = (k_{\rm eff}-1)/\ell$, where $\ell$ is the average neutron lifetime in the plutonium assembly. then the neutrons inside the sphere generate an average flux density $\bar{\phi} = \bar{n}\,v$, and the power released inside the sphere is $P = E_f\,\Sigma_f\,\bar{\phi}\,V$, where $E_f$ is the average energy released per fast fission (anti-neutrinos neglected). the expansion of the sphere is driven by radiation pressure in the hot phase of the explosion. assuming that the released energy is converted into black-body radiation and kinetic energy of the expanding matter shell, one has an average pressure inside the sphere, $p = \frac{4\sigma_{SB}}{3c}\,T^4$, with the stefan-boltzmann constant $\sigma_{SB}$ and the speed of light $c$.
the quantity $E_\gamma = \frac{4\sigma_{SB}}{c}\,T^4\,V = 3\,p\,V$ is the photon energy stored inside the sphere. also the decrease of the number of fissile atoms should be taken into account. since the pressure gradient drives the expansion of the sphere, one has to make a reasonable assumption at the present stage of the model concerning the behaviour of the pressure gradient. however, since the pressure will create a fireball filled with a photon gas pushing away all the matter from the explosion center, the corresponding radiation energy is converted into kinetic energy of the matter, and an ansatz describing the work done by the radiation pressure on the surrounding and moving matter, distributed over an area of $4\pi r^2$, according to $$\dot{E}_{\rm kin} = p \cdot 4\pi r^2 \, \dot{r}$$ is reasonable, where the accelerated mass $m(r)$ is the mass of all the matter that was located inside a sphere with radius $r$ before the nuclear detonation and that got compressed onto a spherical shell with approximately the same radius. accordingly, the photon energy inside the sphere is given by the released fission energy minus the kinetic energy transferred to the shell. in the trinity test, the plutonium core was (probably) surrounded by a uranium tamper, an aluminum pusher shell, plus additional material, including the high explosives surrounding the shells around the core; the mass profile $m(r)$ was modeled accordingly, with the minimum radius of the imploded core, a tamper radius, an aluminum pusher shell radius, an approximate bomb radius when the nuclear detonation starts, and the density of air beyond that. above, approximate (de)compression effects of the bomb material when reaching a maximum core compression factor have been taken into account.
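to get a feel for the numbers, the photon-gas temperature and radiation pressure implied by a given energy deposit follow from $E_\gamma = a\,T^4\,V$ and $p = a\,T^4/3$, with the radiation constant $a = 4\sigma_{SB}/c$; the 1 kt energy and 10 cm radius below are assumed round numbers for illustration only:

```python
import math

A_RAD = 7.566e-16      # radiation constant a = 4*sigma_SB/c, J m^-3 K^-4
KT_TNT = 4.184e12      # J per kiloton of TNT equivalent

def photon_gas(energy_j, radius_m):
    """Temperature (K) and pressure (Pa) of blackbody radiation in a sphere."""
    volume = 4.0 / 3.0 * math.pi * radius_m ** 3
    u = energy_j / volume          # energy density, J/m^3
    temperature = (u / A_RAD) ** 0.25
    pressure = u / 3.0             # radiation pressure p = u/3
    return temperature, pressure

t, p = photon_gas(1.0 * KT_TNT, 0.10)   # 1 kt inside a 10 cm sphere
print(f"T ~ {t:.2e} K, p ~ {p / 1e5:.2e} bar")
```

already one kiloton deposited in a 10 cm sphere corresponds to a few times $10^7$ k and gigabar-scale pressures; the hundred-gigabar regime quoted above is reached as a larger fraction of the yield is released while the fireball is still small.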
The small neutron source located in the center of the plutonium core, starting the chain reaction at maximum compression, was assumed to emit enough neutrons, of the order , such that the chain reaction is initiated immediately when the shock wave of the conventional explosion reaches the center of the core. Since delayed neutrons play no role on the time scale of a nuclear explosion, an average number of prompt neutrons per fission was used for the simulation. Using super-grade plutonium, the probability that the chain reaction starts due to spontaneous fission of is rather small for core compression times of the order of . The Trinity plutonium core consisted of plutonium-gallium alloy containing gallium, with a fast neutron scattering cross section of . The plutonium phase with the lowest density is the delta phase, with a theoretical density of . Although existing only in the temperature range , it can be stabilized at room temperature by adding small quantities of gallium to the plutonium. The convenient non-metric units of a kiloton ( fission events) and a shake ( ) will also be used in the following. Most of the results presented in this section were calculated for a chain reaction starting at the time of maximum core compression with a compression factor of . Adaptive time steps of the order of have been used to model the actual explosion, based on a simple Euler discretization of the dynamical equations presented in the last section. One should note that it does not make too much sense to calculate an extrapolated radius for the neutron flux density outside the plutonium core, i.e. in the uranium tamper, during the intense phase of the nuclear explosion, since the average lifetime of the neutrons in the tamper is much longer than in the core. The lifetime is given by the average length of the path a neutron travels in the uranium until it gets absorbed there by fission or capture, divided by the neutron velocity. The corresponding cross sections averaged over
the fission spectrum are and , and from the uranium density at room temperature and pressure and a compression factor one calculates . Therefore, the neutrons need a time to adapt their distribution which is of the order of the time in which the main part of the energy is released by the nuclear explosion. In fact, it turns out that simply using the extrapolated length for a core surrounded by a vacuum, calculated from eq. ([extrapol]), already leads to reasonable explosion yields for Trinity, as depicted in fig. ([fig_compression_yield]). For a compression factor of , the yield is kilotons. However, in order to have consistency with some data used in the literature, the constant in eq. ([extrapol]) was slightly reduced to in order to gauge our model to a yield of for . Since the limits of the physical validity of diffusion theory are reached, and since the point kinetic model is based on some strong homogeneity assumptions concerning the neutron distribution inside the core in the prompt supercritical phase and on the idea of a spherical shell-like structure of the fireball, such a strategy may be acceptable. A yield of 15 kt is the currently accepted value for the Trinity test due to the content in the gadget only; still, it is estimated that the tamper additionally released about , but we do not try to model this surplus in this study, which finally leads to the generally accepted value for the total Trinity yield of . Of course, the less relevant energy production by the tamper could also be assessed in our approach by modeling the prompt neutron leakage from the core into the tamper. In the _static_ and homogeneous case, the neutron leakage rate is given by with the geometric buckling . Fig. ([fig_trinity_expansion]) displays the size of the fireball as a function of the time after ignition. The results are in very good agreement with the actual radii measured in the Trinity test - see, e.g.
, fig. ([fig_trinity]), showing a high-speed rapatronic camera photograph of the fireball taken after ignition. One should note at this stage that the total yield of the nuclear explosion can, in a more elaborate model, be decomposed into different parts. E.g., within a millionth of a second after the explosion, all matter including the bomb itself and the surrounding air is transformed into a very hot plasma which emits thermal radiation as x-rays, which again gets reabsorbed by the dense shock front of the fireball itself. After some seconds, the energy of the explosion can be decomposed into the blast energy, i.e. the kinetic energy transferred primarily to the air (still about ), thermal radiation including light ( ), and nuclear radiation of various types ( ). At this time, the focus of investigations in the literature is rather on the damage done to humans and the environment, and the present considerations must be replaced by different physical and ethical concepts. Fig. ([fig_upshot_grable]) shows a snapshot taken in 1953 shortly after ignition of the device used in the nuclear weapons test Upshot-Knothole Grable, where an estimated 15 kt gun-type fission bomb exploded above ground, finally producing a spherically symmetric fireball, despite the asymmetric aspects of the pre-ignition device.
Supposing that a considerable part of the total yield of an atomic bomb is converted into the kinetic energy of the matter moving at a distance from the explosion center with a velocity , and neglecting the mass of the bomb for fireballs with a radius larger than as well as the energy of the radiation inside the ball, one has from the kinetic energy, which equals the blast energy , and therefore . Integrating this expression leads to with some integration constant , and finally one has . For large radii, the integration constant is negligible, and eq. ([blast_radius]) is in good agreement with observations from nuclear tests and in very good agreement with the numerical results presented in fig. ([fig_trinity_expansion]). One also observes that doubling the blast yield increases the radius of the fireball at the same time after ignition only by . It is also instructive to consider the fireball expansion velocity as a function of the distance from the explosion center. The tail at larger distances in fig. ([fig_exp_velocity]) corresponds to the description via eq. ([blast_radius]).
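The scaling just derived can be checked numerically. A minimal sketch, assuming a Sedov-Taylor-type law R(t) ~ (E t^2 / rho)^(1/5) with the dimensionless prefactor set to one (the exact constant, which is close to unity for air, depends on the adiabatic index and is not taken from the paper):

```python
RHO_AIR = 1.2  # kg/m^3, ambient air density (assumed)

def blast_radius(e_blast, t, rho=RHO_AIR):
    """Fireball/shock radius from the R ~ (E t^2 / rho)^(1/5) scaling."""
    return (e_blast * t**2 / rho) ** 0.2

# doubling the yield grows the radius at fixed time only by 2^(1/5), about 15%
r1 = blast_radius(4.2e13, 1e-3)   # ~10 kt (illustrative) at t = 1 ms
r2 = blast_radius(8.4e13, 1e-3)
ratio = r2 / r1
print(ratio)   # 2**0.2, about 1.149
```

The weak one-fifth-power dependence on the yield is the reason why even large uncertainties in the released energy change the fireball radius only modestly.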
After a first acceleration phase due to the violent energy release by the plutonium core, the expansion enters a deceleration phase caused by the high density of the uranium tamper. When the explosion enters the aluminum, with its lower density, the pressure is still high enough to initiate a reacceleration of the expansion. Finally, the rest of the bomb and the air surrounding the device are pushed away, still at a speed of the order of . Fig. ([fig_trinity_temperature]) displays the radiation temperature reached inside the Trinity fireball. Temperatures of the order of correspond to particle energies of with Boltzmann's constant . The temperature for complete ionization of plutonium is of the order of some , and the binding energy of the innermost K-shell electrons is larger than ; therefore, in the plasma of a fission explosion one is still far away from complete ionization, but the plutonium atom may lose a large fraction of its electrons for a very short period of a few shakes. Note that for a partially ionized plasma the degree of ionization can be calculated from the Saha equation. An analysis of the numerical results of the point kinetic model confirmed that the total yield of the explosion of a plutonium implosion device is approximately proportional to the inserted reactivity, i.e.
, the reactivity of the compressed core when the chain reaction starts, as anticipated by Serber. Roughly, one has in the present model. Of course, the total yield depends on the physical parameters governing the point kinetic model. The dependence of the yield is not very pronounced for variations of . If more energy were released per single fission, the core would expand faster and reach second prompt criticality within a shorter period, which has a negative impact on the total yield of the device. However, it is advantageous to have fast neutrons, large fission cross sections, and a large fast fission neutron yield. The fraction of plutonium nuclei undergoing fission during a nuclear explosion is called the efficiency of the bomb. Only about of the plutonium in the core was destroyed in the Trinity test. In the vast majority of cases, a plutonium nucleus splits into two smaller nuclei and 2, 3, or 4 additional neutrons. Only about of fissions are ternary fissions, producing a third light nucleus such as ( ) or ( ). E.g., fissioning 1000 nuclei produces on average fission fragments with an atomic mass number of - namely , , and . These fragments are neutron-rich and tend to undergo subsequent -decays according to the decay chain with given half-lives until a (quasi) stable fission product like , with a half-life of years, is reached. During a nuclear explosion, such -decays of fission fragments play no role due to the very short time scale of the nuclear chain reaction. Still, the initial fission fragments have some influence on the chain reaction.
Because they are neutron-rich, their absorptive properties are rather irrelevant, but the fragments act as additional scatterers for the neutrons, thereby influencing the neutron transport inside the core. Scattering has a confining effect, since the neutron diffusion constant decreases when the neutron scattering cross section increases. The scattering cross sections of the fission fragments were not taken into account in the present simulation, but including a corresponding macroscopic scattering cross section term in the point kinetic model is straightforward. Note, however, that xenon-135 has the highest known thermal neutron absorption cross section of any nuclide, namely for neutrons with a kinetic energy of . In nuclear reactors, can strongly influence the reactivity balance, but its concentration typically varies on a time scale of the order of some hours. Reactor-grade plutonium contains different plutonium isotopes. Whereas the heat generated by the is a problem for the integrity of the explosives surrounding the nuclear part of the bomb, is a source of fast neutrons which can start the chain reaction before the plutonium core has reached sufficiently high compression for the intended efficiency. undergoes spontaneous fissions per gram-second, releasing neutrons at a rate of about . In most cases, the splitting nucleus emits one, two, or three neutrons with comparable probabilities. When a neutron is released in the core, there is still some probability that it escapes the fissile zone. The probability that a fast neutron triggers a fission is given by . In the limit of an infinitely extended reactor, this probability becomes . In a critical assembly where , one has , i.e.
only one third of the neutrons initiates a fission, which releases about three new neutrons on average. A small part of the neutrons will be captured, thereby producing , but most of the neutrons diffuse out of the core. This fact has to be taken into account when modeling fizzle probabilities. For the sake of clarity, we mention here that fissile materials can sustain a chain reaction with neutrons of any energy, whereas fissionable materials are materials that can only be made to fission with fast neutrons. Fertile materials are materials that can be transformed (i.e., transmuted) into fissile materials by the bombardment of neutrons inside a reactor. In this sense, is fissionable and fertile. Fig. ([fig_trinity_fizzle]) shows the yield for 700 Trinity explosions with randomly chosen neutron source terms, corresponding to a content ranging from (super grade to MOX grade) in the Trinity core. The maximum compression factor reached has been randomly blurred in order to mimic different efficiencies of the high explosives and to render the plot more legible to the eye.
For fig. ([fig_trinity_fizzle]), a compression time of was assumed from first prompt criticality until maximum compression without predetonation, corresponding to a velocity of the order of of the surface of the imploding plutonium core. The situation changes when the insertion time is doubled, since the fizzle probability is then higher, as depicted in fig. ([fig_trinity_fizzle_slow]). Note that for the simulation of the Trinity device, it was assumed that the chain reaction starts at full compression, when the core is basically at rest. For fizzles, the contraction phase has to be modeled, during which the core reaches first prompt criticality where . Shortly after having reached the maximum reactivity, the expansion phase starts and the core eventually reaches second prompt criticality where again . [Fig. caption: yield vs. Pu content (Trinity-type device).] One should note that even when the Trinity device fizzles in the most unfavorable manner (from the war-strategic view), it still produces an energy of the order of or more, enough to destroy and contaminate a city district. When the chain reaction starts at a very low positive prompt reactivity, the core still has some time to contract further, eventually leading to a considerable energy release. The simulations show that the fizzle yield scales as , where is a typical initial compression velocity of the core when reaching prompt criticality. However, precise estimations concerning the minimal fizzle yield depend strongly on how the implosion of the core is modeled.
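Fizzle statistics of the kind shown in these figures can be mimicked with a toy Monte Carlo. The numbers below are rough, publicly known orders of magnitude (Pu-240 emits on the order of a thousand spontaneous-fission neutrons per gram and second), and the cubic yield-degradation law is our own crude stand-in for the velocity scaling found in the simulations, not the paper's actual model:

```python
import random

random.seed(1)

FULL_YIELD_KT = 21.0   # nominal yield if no predetonation occurs (illustrative)
SOURCE_RATE = 1.0e3    # Pu-240 spontaneous-fission neutrons per gram-second (rough)

def fizzle_trial(pu240_grams, t_insert=1e-5):
    """One toy trial: a stray Pu-240 neutron may start the chain too early.

    The waiting time to the first source neutron after first prompt
    criticality is exponential; if it is shorter than the insertion time
    t_insert, the yield is degraded by a crude power law in the reached
    insertion fraction.
    """
    rate = SOURCE_RATE * pu240_grams
    t_first = random.expovariate(rate)
    if t_first >= t_insert:
        return FULL_YIELD_KT                      # no predetonation
    return FULL_YIELD_KT * (t_first / t_insert) ** 3

# ~6% Pu-240 in a 6 kg core (illustrative "reactor grade" composition)
yields = [fizzle_trial(pu240_grams=360.0) for _ in range(700)]
fizzle_fraction = sum(1 for y in yields if y < 0.5 * FULL_YIELD_KT) / len(yields)
print(fizzle_fraction)
```

Doubling `t_insert` in this sketch raises the predetonation probability in the same qualitative way as the slow-insertion scenario discussed above.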
In fig. ([fig_trinity_fizzle_slow]), in order to facilitate the probabilistic comparison with fig. ([fig_trinity_fizzle]), it was assumed that the homogeneously contracting core has a higher compressibility and still reaches the full compression factor when the bomb does not fizzle, although the initial contraction velocity is lower. In reality, the inserted reactivity would decrease for a lower compression velocity, leading to a corresponding reduction of the maximum yield of the device. The present work is not intended to model implosion scenarios, which constitute the harder part of the problem from the physical point of view. In addition, for low yields the pressure in the hot phase of the fizzle explosion is no longer radiation but matter dominated, and the limits of our model are reached. Of course, the transition to the low-temperature domain where the gas pressure becomes dominant could also be integrated in the point kinetic model. To take it with a grain of salt, one may argue that for the construction of a fizzle bomb with low-quality plutonium it is no longer relevant to integrate an efficient neutron source in the core, but rather to achieve a fast compression of the core with efficient high explosives. [Fig. caption: yield vs. Pu content (Trinity-type device) for slow reactivity insertion.] Furthermore, a high content is disadvantageous for the (fizzle) yield, since the fission cross section of averaged over the fission spectrum is , whereas for one has . The impact of is relatively small. The construction of a bomb with low-quality plutonium would therefore involve high costs and a comparatively low explosive yield. It is a striking fact that, for many characteristic quantities, sophisticated simulations of nuclear explosions do not necessarily lead to better numerical results than an analysis based on high-school mathematics like the one presented in this work. This is due to the complexity of the non-equilibrium thermodynamics involved in
nuclear explosions and the lack of some specific exact experimental data. The present paper is a reminiscence of Robert Serber's lectures, which were given in 1943 to new members of the Manhattan Project with the aim of explaining the basic scientific facts of the wartime enterprise, and which, assembled in note form and mimeographed, became the legendary LA-1, the Los Alamos Primer, classified secret-limited for twenty years after the Second World War. Despite its simplifications, the point kinetic model provides a flexible approach to the physics of the uncontrolled chain reaction. Performing the corresponding simulations is straightforward and may help students to interpret the extreme physical conditions that briefly occur within nuclear fission bombs. The influence of engineering details like tampers and other material surrounding the core can be studied. Effects which are relevant, e.g., for discussions concerning the non-proliferation of nuclear weapons can be accessed with ease. One-dimensional numerical simulations beyond the point kinetic model, including radiative transport phenomena for the spherically symmetric case, will be presented in a forthcoming paper. The lesson that relatively simple toy-model calculations are sufficient to get approximate estimates of the yields of nuclear detonations does not constitute a real security problem.
The main obstacle for terrorist organizations seeking to build an atomic bomb is the acquisition and technical handling of the potentially highly radiotoxic material, with an acceptable isotopic vector, needed for an efficient device. However, for developing countries such problems will play an ever smaller role in the near future. Possession of nuclear weapons may prevent states from entering into destructive wars. Emerging decadent societies in decadent democracies, which are already in possession of numerous nuclear weapons and which are no longer able to elect wise people as their leaders or to control their nuclear inventory, may initiate the final countdown to zero for the third atomic bomb that will be deployed in an act of war. In such a case the question remains whether the people are still able to apply ideas from the Age of Enlightenment, such as those found in the Declaration of Independence, urging the people to act. For the sake of future generations, it is the duty of every person never to forget about and to warn of the terrible destructive force of (thermo-)nuclear and other weapons of mass destruction.

The author wishes to thank Josef Ochsner from the Paul Scherrer Institute in Switzerland for valuable comments and for carefully reading the manuscript.

M. Göttsche, G. Kirchner, _Improving neutron multiplicity counting for the spatial dependence of multiplication: results for spherical plutonium samples_, Nucl. Instr. Meth. Phys. Res. A *798* (2015) 99-106.

A concise point kinetic model of the explosion of a prompt supercritical sphere driven by a nuclear fission chain reaction is presented. The findings are in good agreement with the data available for Trinity, the first detonation of a nuclear weapon, conducted by the United States Army as part of the Manhattan Project. Results are presented for an implosion device containing pure plutonium-239, although the model can be easily applied to, e.g., uranium-235.
The fizzle probability and corresponding yield of a fission bomb containing plutonium recovered from reactor fuel, and therefore containing significant amounts of spontaneously fissioning plutonium-240 which can induce a predetonation of the device, is illustrated by adding a corresponding source term in the presented model. Related questions, such as whether a bomb could be made by developing countries or terrorist organizations, can be tackled this way. Although the information needed to answer such questions is in the public domain, it is difficult for members of organizations who are concerned about the proliferation of nuclear explosives to extract a consistent picture of the subject. Physics and Astronomy Classification Scheme (2010): 28.20.-v neutron physics; 25.85.Ec neutron-induced fission; 28.70.+y nuclear explosions.
It is known that the special relativity theory is the physical theory of measurement in inertial frames of reference proposed by Albert Einstein in 1905, after substantial contributions of Hendrik Lorentz and Henri Poincaré. This theory gave us , which led to the atomic bomb and unlocked the secret of the stars. At that time the expanding nature of the universe had not yet been discovered, and this information was not included in the formulation of the special relativity theory. In 1915 Albert Einstein proposed the general relativity theory, in which the expansion of the universe was not the main objective. He proposed that gravity, as well as motion, can affect the intervals of time and of space. The general relativity theory gave us the warping of space, the big bang, and black holes. The metric of space-time proposed in general relativity does not allow the incorporation of the expansion into the Lorentz transformation (since the increasing function of time that characterizes the expanding nature of space breaks the linearity of the considered transformations), and therefore does not permit one to perceive whether the laws of nature proposed by special relativity are affected together with the universe expansion. Many attempts have been made in this direction in the last hundred years, which can easily be found in the literature. The only papers cited in this article are those which have a clear and direct relationship with this kind of approach. Given that the expansion was studied in the general relativity theory, it may seem counter-natural to discuss universe expansion in the special relativity theory. However, discussing the effect of the expansion on the laws of nature in special relativity is different, and might entail some disturbing new concepts which will deeply challenge our deep-rooted understanding of the relativistic laws of nature.
In this paper we will explore the incorporation of a special expansion into the special relativity theory that preserves the notion of inertial frame, and we will rewrite and interpret the relativistic laws of nature in the light of this special expansion. In this article we introduce a practical quantification of the space expansion in our universe using a metric with a numerical product that quantifies the expansion of the space step by step (we will call it universe expansion) and maintains the notion of inertial frame on it. In the second section we rewrite the Lorentz transformation equations using the new metric to obtain a family of Lorentz transformation equations, where each transformation represents the Lorentz transformation equations at the step n of the space expansion. This family of transformations leads to a decreasing limiting velocity of moving bodies from one step to another, so that the physical laws are affected by the universe expansion; conclusions concerning energy, masses, and momentum in an expanding space are then derived. In the third section, the composition of velocities is derived using the -Lorentz transformation equations to discuss the invariance of the new limiting velocities of matter. The rest mass of the photon is then discussed in the fourth section and a new quantum formalism is introduced. We conclude that section by discussing an inelastic collision, in which we find that the conversion of rest energy into kinetic energy is affected by the universe expansion, as well as the fact that the total energy liberated by fission of atoms increases naturally together with the universe expansion. We conclude with some comments related to the notions of time and proper time.
From the rate at which galaxies are receding from each other in our universe, it can be deduced that all galaxies must have been very close to each other at the same time in the past. While galaxies are receding from each other, what really happens to the physical laws - are they affected by the process of expansion or not? In order to investigate the status of the physical laws together with the universe expansion, we propose the following quantification of the universe movement: suppose that we can quantify the whole universe movement, from the beginning of the universe expansion, where matter was together, until the present time with the current picture of matter distribution. Suppose that we can subdivide the whole process of expansion from the beginning until the present time into n steps, in which we approximate the rate of change of the distance between any two separated events from one step to another together with the universe expansion as follows: _if the distance between two events at the step n is equal to , and at the step (n+1) is equal to , then for all , where is a sequence such that , , and converges.
_ For this quantification, the step 0 represents the beginning of the expansion of our universe (once ), and the space-time is defined as the set of events together with the notion of a squared interval defined between any two events. An event in a reference frame is fixed by four coordinates , , and , where are the spatial coordinates of the point in space where the event in question occurs at the step 0, and where fixes the instant when this event occurs. Another event will be described within the same reference frame by four different coordinates , , , and . The points in space where these two events occur are separated by the distance given by . The moments in time when these two events occur are separated at the step 0 by a time interval given by . The squared interval between these two events is given as a function of these quantities, by definition, through the generalization of the Pythagorean theorem , where is the maximum speed of signal propagation at the beginning of the universe expansion (the maximum speed for the transmission of information). At the step 1 of the universe expansion, the event in the reference frame is fixed by four new coordinates and , where are the spatial coordinates of the point in space where the event in question occurs at the step 1, and where fixes the instant when this event occurs, while the event is described within the same reference frame by four different coordinates , , , and . The points in space where these two events occur are separated by the distance given by . While the universe expands from the step 0 to the step 1 at a rate such that , and , we have . The moments in time when these two events occur are separated at the step 1 by a time interval given by . The squared interval between these two events is given as a function of these quantities, by definition, through the generalization of the Pythagorean theorem , where is the maximum speed of signal propagation at the beginning of the universe expansion, etc.
At the step n of the universe expansion, the event in the reference frame is fixed by four coordinates , , and , where are the spatial coordinates of the point in space where the event in question occurs at the step n, and where fixes the instant when this event occurs, while the event is described within the same reference frame by four different coordinates , , , and . The points in space where these two events occur are separated by the distance given by . While the universe expands from the step (n-1) to the step n at a rate such that , and , we have . The moments in time when these two events occur are separated at the step n by a time interval given by . The squared interval between these two events is given as a function of these quantities, by definition, through the generalization of the Pythagorean theorem , which gives , where is the maximum speed of signal propagation at the beginning of the universe expansion.
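Since the inline symbols were lost in transcription, the construction above can be summarized in our own notation (an assumed reconstruction: the per-step expansion factors are written a_k and their partial product p_n):

```latex
p_n=\prod_{k=1}^{n}a_k,\qquad a_k\ge 1,\quad (p_n)\ \text{bounded and convergent},
\qquad d_n=p_n\,d_0,
```
```latex
(\Delta s_n)^2=c^2(\Delta t)^2-p_n^2\left[(\Delta x)^2+(\Delta y)^2+(\Delta z)^2\right],
\qquad
ds_n^2=c^2\,dt^2-p_n^2\,(dx^2+dy^2+dz^2),
```

where c is the maximum signal speed at step 0. In this form the step-n interval differs from the Minkowski interval only through the spatial factor p_n.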
In such a locally inertial reference frame, the laws of special relativity apply at each step. We propose to rewrite the physical laws in each space-time step by step to investigate any possible evolution in the laws of nature. The metric ([m1]) can be written as with , , and . Let us consider an expanding space in which the proper distance between any distant points is increasing following the above quantification, such that _at small scales as well as at the subgalactic scale the fundamental forces of nature maintain the form of matter we have today_. Let us consider that the expansion is totally modeled by an adequate numerical quantification represented by the product introduced above (that is to say, if the position of a body at rest or the location of an event is expressed locally via three space coordinates in an inertial frame at the step 0, then at the step 1 its location at rest is expressed in the same frame by , at the step 2 it is expressed in the same frame by , and at the step n its location at rest is expressed by , where n represents the expansion step of the space movement quantification). Let us consider two Cartesian frames and in , such that at each expansion step the equations of Newtonian mechanics hold good. Suppose that an event has Cartesian coordinates relative to , and relative to , at the same step of expansion. In general, for a linear transformation of the Cartesian frames we have , where the coefficients depend on the movement. The general properties of homogeneity and isotropy of space, with the choice that the frame moves in the x direction of with uniform velocity (without loss of generality), such that the corresponding axes of and remain parallel throughout the motion, having coincided at , give
Since the space is expanding, the observer in the frame will observe that the light travels in all directions in an expanding sphere, at the step n, given by , and from the frame the observer perceives the same, and we have . Substituting formula ([e1]) in ([e2]) gives , which yields the following system . The resolution of the system ([e3]) leads to the Lorentz transformation equations with a new limiting velocity . The first immediate consequence of the Lorentz transformations in an expanding universe is the following: there exists a limiting velocity for moving bodies given by , which depends on the fossil light velocity (the velocity of light at the beginning of the universe expansion) and the expansion parameter of the universe. Indeed, is an increasing and bounded product greater than one, so the limiting velocity decreases along with the space expansion of the universe. It turns out that the limiting velocity was maximal at the beginning of the expansion of the universe (step 0) and was equal to ; that is why we call it the fossil velocity of light. Thus the limiting velocity of a moving body, including light, will decrease in an expanding universe. However, the physical meaning of the equations ([e4]) obtained with respect to moving rigid bodies and moving clocks remains the same as for the classical Lorentz transformations (deformability and loss of synchronization, time transformations, etc.), except for the fact that the expansion of the universe will be involved in all formulations of the physical laws, as we will see later on.
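With the symbols lost in transcription, the resulting family of transformations can be written compactly in our assumed notation (c the fossil light speed, p_n the expansion product; this reconstruction is consistent with the decreasing limiting velocity stated above, but it is our own rendering, not the paper's verbatim equations):

```latex
c_n=\frac{c}{p_n},\qquad
\gamma_n=\frac{1}{\sqrt{1-v^2/c_n^2}},\qquad
x'=\gamma_n\,(x-vt),\quad y'=y,\quad z'=z,\quad
t'=\gamma_n\left(t-\frac{v\,x}{c_n^2}\right).
```

Since p_n is greater than one and increasing, c_n decreases monotonically from its initial value c, while for p_n = 1 the classical Lorentz transformation is recovered.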
As a consequence of these new transformations, and taking into account the dynamics of the universe, we propose to adjust Einstein's equation relative to the mass-energy as follows: the total energy of a moving body at the step n of the space expansion is given by the law , and the body's rest-mass energy at the step n of the space expansion is given by the law , where represents the mass of a body at rest relative to the observer in an expanding universe at the step n. The relative mass at the step n is given by the law . Indeed, using the Newtonian formula , the equation ([e6]), and the fact that Newton's law remains valid if it is written as , gives the following: . On the other hand, we have and , which yields in ([n1]) , then with . By integration we obtain ; for , and , we obtain , and then , which gives equation ([e7]). On the other hand, the law ([e7]) leads to the law ([e6]); indeed, the work of displacing a body by a distance at the step n is , where is the velocity vector. If this work serves to increase the energy of the body at the step n, then . Then, since is given by ([e7]), we have ; by substitution we obtain , then , which gives, together with formula ([dm]), , that is to say the law ([e6]). The universe dynamics does not affect the mass of a given object at rest. However, an object at rest has a rest energy given by ([e0]), in which the universe movement is manifested. When the object starts moving under any force, its total energy, momentum and mass are directly affected by the expansion parameters of the universe. If the speed is the fossil velocity of light, then the term in laws ([e6]), ([e0]), and ([e7]) represents the current experimental velocity of light, of course if we consider that the present time corresponds to the step n of the universe expansion, and it constitutes a limiting velocity for any motion or transfer of interaction at the step n of the universe expansion.
For small values of, the total energy ([e6]) goes over into the classical expression of the kinetic energy, as shown by, which gives. The first term represents the energy at rest, and the second term represents the Newtonian kinetic energy. The relativistic kinetic energy, or the motion energy at the expansion step n, can be defined by subtracting the rest energy from the total energy. Substituting formulas ([e6]) and ([e0]) in the last equality gives, and using the relative mass ([e7]) we obtain. The kinetic energy of a body is not only related to the increase of mass when bodies are moving, as asserted by special relativity, but also to the universe expansion. The use of equation ([e7]) in the equation of motion (force is equal to the rate of change of momentum) remains valid. However, the momentum at the step n is now given by:. The equation determines the motion of a body acted on by any force, but now correctly takes expansion effects into account. Clearly, the form of the momentum and force equations is very similar in relativity theory and in our approach; the effective mass depends on the speed of motion of the body relative to the observer and on the n-th step expansion parameter according to the formula ([e7]), while in relativity theory it is independent of the universe expansion. The crucial feature is that the effective mass diverges as tends to the limiting velocity, and so the momentum also diverges. As in relativity theory, the energy and momentum are related.
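The elided low-velocity expansion presumably parallels the standard one:

```latex
\frac{m_0 c^2}{\sqrt{1 - v^2/c^2}}
  = m_0 c^2 + \tfrac{1}{2}\, m_0 v^2
  + O\!\left(\frac{v^4}{c^2}\right),
```

in which the first term is the rest energy and the second the Newtonian kinetic energy, exactly as the text describes.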
Indeed, using formulas ([e6]) and ([e6]) we obtain the following relation, as for special relativity. We will use the classical definition of velocity. If we call the velocity of an object in the frame, its components with respect to the three axes are. The same velocity is measured in the frame as, with components given by. For simplicity and without loss of generality, we will consider only the case where the velocity is parallel to the x-axis. A simple differentiation of the equations of the Lorentz transformations gives; then we have, which gives. Since is parallel to the x-axis, the law of composition of velocities is given by, and inversely, which has the form of the relativistic law of composition of velocities with the appearance of the term that characterizes the expansion of the universe at the step n. In the extreme case where an observer in the frame emits light signals in the direction of the x-axis, since the frame is in an expanding universe at the step n, the velocity of these signals in the frame is given by, which gives, which gives, and then. The limiting velocity of light at the step n is invariant, that is to say, the velocity of light "measured" in a moving frame at the step n appears to be equal to in any direction. The speed of light in vacuum, at the step n of the universe expansion, will be found to be the same (equal to) by any two observers in uniform relative motion, and this is true for the whole process of the universe expansion; meanwhile this speed is decreasing together with the universe expansion.
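As a hedged sketch (c_n below is our notation for the step-n limiting velocity), the composition law described here would mirror the relativistic one with c replaced by c_n:

```latex
u' = \frac{u - v}{1 - \dfrac{u\,v}{c_n^2}}, \qquad
u  = \frac{u' + v}{1 + \dfrac{u'\,v}{c_n^2}}.
```

Setting u = c_n in the first relation indeed gives u' = c_n, which is the step-n invariance of the velocity of light claimed in the text.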
The Lorentz transformation gives as an invariant limiting velocity of matter; meanwhile gives as the invariant limiting velocity of matter, and so on, until the Lorentz transformation which gives as the invariant limiting velocity of matter. It turns out that the limiting velocity of moving bodies is decreasing together with the universe expansion, and that the velocity of light in empty space is independent of the velocity of its source _at each step_ of the universe expansion, despite its decreasing nature together with the universe expansion. In the whole process of the universe expansion the velocity of light remains independent of the reference frame of the observer. The formula ([e6]) can be justified exactly as the famous formula (Einstein), by using the new composition of velocities provided by the Lorentz transformation and the momentum conservation at the step n. The quantification of the expansion introduced above leads us to set up different step-Lorentz transformations that permit us to quantify the evolution of the physical laws together with the universe expansion, and to derive a relationship between the space-time views obtained by any two observers in an expanding universe. The result obtained unifies the fundamental special-relativity results of time dilation, length contraction, and the relativity of simultaneity in a single relation for different steps of quantified expansion. However, from these different Lorentz transformations, we sort out that the limiting velocity of matter is decreasing from one step to another, including the velocity of light.
The velocity of light at the beginning of the expansion of the universe (step 0) was constant and equal to an unknown value, and the -Lorentz transformation equations are given by. This transformation leads to the relativistic formulation of the physical laws. On the other hand, at the step 1 the -Lorentz transformation equations become, from which we sort out that matter has a new limiting velocity. At this step all the equations of Newtonian mechanics hold good, and the formulations of the physical laws are obtained under some changes relative to the new limiting velocity. Similar formulations of the physical law equations ([t0]) are introduced with the new limiting velocity and denoted by. At the step n, the -Lorentz transformation equations become, in which the limiting velocity of matter reaches the value of, and where all formulations of the physical laws can be obtained under some changes, with the hypothesis that the equations of Newtonian mechanics are valid at this step of the universe expansion (conservation of momentum, conservation of energy, etc.). The new limiting velocity at the step n induces some changes in all formulations (as regards both mechanics and electromagnetism in inertial systems), including the Maxwell equations, so that the physical laws remain the same under the -Lorentz transformations. The formulations of the physical law equations at the step n with the new limiting velocity are denoted by, where the quantities,,,, and take the forms ([e6]), ([e0]), ([ec]), ([e6]), and ([e7]) respectively under the -Lorentz transformation for all. The Lorentz transformations,,,, and have the same mathematical form, and all the physical quantities are affected by the space expansion from one step to another, except the rest mass, which is invariant under the universe expansion.
Since light has no rest frame, and since the formalism must be the same for all matter, the quantity in the above formalism represents in reality the mass of matter, which is invariant under the universe expansion; we will call it the _invariant mass_. The family of transformations leads to an inevitable variation of the limiting velocity of moving bodies according to the universe expansion. This has interesting consequences. In the expansion process, the rest energy at the step n can be evaluated and compared to the rest energy at the beginning of the universe expansion. Indeed, the rest energy at the step n is given by the law, and using the rest energy notation we obtain, which means that the rest energy of matter is decreasing along with the universe expansion. However, the relative mass of matter is increasing together with the universe expansion; indeed, from the law ([e7]) at the step n and the same law at the step n+1 we sort out that, and we obtain for all, which gives. This means that the relative mass of matter in our universe is increasing along with the universe expansion, as well as the momentum, while the rest energy is decreasing. It is possible to determine the evolution of energy, momentum and relative mass together with the universe expansion by using the invariant mass; indeed, it is not difficult to write all these quantities at the step n as functions of their values at the step 0 (the beginning of the expansion), and we have the law of evolution of the relative mass together with the universe expansion given by; meanwhile the evolution of the momentum together with the universe expansion is given by the law, and the total energy evolution together with the universe expansion is given by the law. However, the evolution of the physical laws from one step to another is given by the following formulas: the evolution of the momentum from one step to another is given by, and the evolution of the total energy from one step to another is given by the law. It turns
out that the concept of expansion is a real source of energy, and this will be analyzed later on with a concrete example. The velocity of light was constant at the beginning of the expansion of our universe and equal to (an unknown fossil value). What we measure today in our experiments and call the velocity of light corresponds in reality to the velocity of light at the expansion step n, where its value is given by. Light is described in quantum mechanics as a quantum of zero mass, where the relation between energy and frequency is given by the Planck-Einstein relation, and where the momentum is given by the relation, with the wavelength given by. The rules of translation from corpuscular terminology to wave terminology, and vice versa, are based on the fact that an electromagnetic wave of length and intensity is considered as a stream of photons of frequency and intensity, where is the number of photons passing per unit time through unit area, and the direction of motion of the wave front is the direction of motion of the photons. In order to integrate the new formalism of expansion into the quantum formulation, we introduce the following: we denote by the fossil wavelength of the light at the beginning of the universe expansion, when the celerity of the photon was, with the fossil frequency of the light at that time. We denote by the current wavelength of the light at the step n of the universe expansion, where the celerity of the photon is, with the light frequency at the expansion step n.
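The elided quantum relations are presumably the standard Planck-Einstein ones, reproduced here for reference:

```latex
E = h\,\nu, \qquad p = \frac{h}{\lambda}, \qquad \lambda\,\nu = c,
```

where, in the paper's formalism, c would be replaced by the velocity of light at the relevant step of the expansion.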
Since the wavelength represents a distance, by using the property of the expansion of space the wavelength expands, and we have, which leads to the following relation between the fossil light frequency and the light frequency at the expansion step n, and then. From the relativistic relation between energy and momentum, we sort out the momentum of a photon with velocity, given by. Thus, and is well defined for; then, which gives that the rest energy of a photon is not equal to zero. The substitution of ([evl]) into the formula ([pv]) gives. The preceding equality can be expressed using the rest energy ([en]) at the step n and as. Therefore the energy and momentum of a photon are related by the law. The law ([pn]) leads to the following approximation, where is the limiting velocity of light measured at the step n. The relation ([c]) was introduced experimentally by A. H. Compton in 1923. Using the formulation given by the Lorentz transformation, the relative mass of a moving body is given by, which gives, then, which gives; then for all we have. Nevertheless, if the invariant mass of a photon was zero at the origin of the expansion, when its velocity was, it will remain zero at the step n, when light moves at speed, because of its natural invariance under the universe expansion. If we suppose that the invariant mass of a photon at the step n is, (since for the formula ([e8]) is not defined for), then from the universe expansion the speed of light is given by, and by substitution into the formula ([e8]) we obtain. Using the value of given by formula ([pf]), the equality ([eq]) becomes. On the other hand we have ([evl]), and then the equality ([eq1]) gives, which is equivalent to. Since the rest energy of a photon at the beginning of the universe expansion (step 0) was not null, formula ([con]) leads to the contradiction; then the invariant mass of the photon.
Along the universe expansion, the invariant mass of matter remains the same, while the associated rest energy decreases ( ), and the momentum of matter increases. If the rest energy of a photon were zero, it could not decrease. Since, , we can evaluate the invariant mass for a photon of velocity. Indeed, we have. The use of the formula ([evl]) in the preceding equality gives, and we have, using ([en]) and the velocity of light at the step n, to obtain, which means that the ratio between the rest energy and the velocity of light is invariant under the universe expansion. The formula ([ev0]) means that light has a non-zero invariant mass. However, the invariant mass of a photon depends on the fossil rest energy (the rest energy at the step n is impossible to determine since light is never at rest). We know only that the invariant mass and the rest energy of a photon are not null; they might be small, very small, but not null. In order to sort out the real nature of the energy gained by matter in an expanding universe, we introduce the following example of an inelastic collision: consider two identical particles which move toward each other along a straight line, with equal speeds and equal invariant masses; they collide and stick together in an expanding space at the step n. The conservation of momentum at the step n gives:, thus, which gives, so the final object is at rest with mass. The conservation of total energy at the step n of the universe expansion gives. Since the final object is at rest (that is to say), it follows that; then we obtain. The mass of the final object is larger than the sum of the original masses. To evaluate the nature of the energy created with the increase of mass together with the universe expansion from the step 0 to the step n, we have to compare the same collision at the origin of the expansion (step 0) and at the step n.
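A quick numerical check of the collision argument in its standard special-relativistic form (the paper's step-n version would replace the velocity of light by the step-n limiting velocity; the function name and the choice of units are ours):

```python
import math

def final_rest_mass(m, v, c):
    """Rest mass of the composite object when two identical particles
    of rest mass m collide head-on at speed v and stick together
    (standard special relativity; the paper's step-n version would
    use the step-n limiting velocity instead of c)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    # The momenta cancel, so the composite is at rest; conservation
    # of total energy 2 * gamma * m * c^2 = M * c^2 gives M = 2*gamma*m.
    return 2.0 * gamma * m

c = 1.0           # work in units where the limiting velocity is 1
m, v = 1.0, 0.6
M = final_rest_mass(m, v, c)
print(M > 2 * m)  # the composite is heavier than the sum of the parts
print(round(M, 3))  # 2 / sqrt(1 - 0.36) = 2 / 0.8 = 2.5
```

The excess mass M - 2m is exactly the kinetic energy brought in, divided by c squared, which is the conversion the text goes on to analyze.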
Indeed, the universe expansion creates an increase of moving masses, and for a small value of we can approximate ([mi]) as, which gives. The gain in mass represents a rest energy at the step n given by. This means that the rest energy gained by moving particles together with the universe expansion is approximately equal to the sum of the classical kinetic energies of the two particles multiplied by the rate of increase of rest energy due to the universe expansion, that is to say. Indeed, from formula ([en]) it is easy to evaluate the variation of the rest energy between the beginning of the universe expansion (step 0) and the universe expansion at the step n; we obtain, from which it turns out that represents the rate of decrease of the rest energy together with the universe expansion. Since the mass of the final object is larger than the sum of the original masses, we can approximate this excess of mass for a small value of, and compare it to the same value at the beginning of the universe expansion. Indeed, the approximation of the law ([mn]), for all, gives, which gives, and then we obtain. It turns out that the excess mass of the composite object at the step n of the universe expansion, compared to the mass of the same composite object at the beginning of the expansion (step 0), is increasing together with the universe expansion, and then the kinetic energy brought in also increases. The lost kinetic energy has been converted into rest energy (mass). The classical explanation for the loss of kinetic energy attributes it to conversion into thermal energy (heat): the final object will have a higher temperature or a larger internal energy.
Now, since the mass of the final object after the collision (formula ([mn])) will have an increasing total mass, this means that the temperature of the final object will increase together with the universe expansion. Since much evidence indicates that the early Earth was molten from the bombardment of rocks and mini planets, among other causes, we can say that there must exist a natural factor that contributes to the global warming of our planet due to the universe expansion (the global warming of our planet is not only attributable to human activities!). Suppose that we have an atom of uranium with rest mass measured in an expanding universe at the step n, and suppose that something happens so that the atom flies into two equal pieces moving with speed, so that each part has a relative mass at the step n. Suppose that these pieces encounter enough material to slow them down until they stop; then each part will have an invariant mass. To reach its rest position each part will give up an amount of energy, left in the material in some form, whatever it may be.
The energy left by one part is then given by. The conservation of total energy in an expanding universe at the step n gives, so the total liberated energy at the step n is given by; then, which can be written as, where,. The law given by ([en2]) for was used to estimate how much energy would be liberated by fission in the atomic bomb, since the mass of the uranium atom was known, as well as the atoms into which it splits (iodine, xenon, and so on). However, it turns out from the law ([en2]) that the energy released when an atom of uranium undergoes fission is correlated with the universe expansion, which has interesting consequences. As the universe expands we have the law ([mc]), which gives, and from ([en2]) we have. Using the law ([en0]) we obtain. This means that the ratio between the total energy liberated by an atom of uranium and its rest energy is increasing together with the universe expansion. The total energy liberated by an atom of uranium ([en3]) can be approximated; indeed, we have, and then we have. By using the law ([en]) that expresses the correlation between the rest energy at the step n and the rest energy at the step 0, we obtain. For a small value of, we obtain the following approximation, which gives; then it turns out that the total energy liberated by an atom of uranium is increasing together with the universe expansion, and we have for one atom of uranium the following evolution of liberated energy from one step to another. This has interesting consequences that will change our understanding of gravity, the missing masses, or the estimation of the energy of stars, among others. The energy in our universe increases together with the universe expansion, and this excess of energy is naturally due to the recession of galaxies from each other. The increase of energy and mass in our universe compensates for the increase of distance between matter, and might explain why gravity exists at long distance. Our estimation of the energy of stars, or
galaxies' masses are erroneous if we omit to take into account the movement of matter due to the universe expansion. We conclude by raising the inescapable fact that there is no stationary safe distance when an atom of uranium undergoes fission in an expanding universe; since the liberated energy increases along with the universe expansion, the security of nuclear reactors must be endlessly reconsidered. It turns out that taming this energy is strewn with risk and peril together with the universe expansion. The family of transformations,,,, is a mathematical formulation of the Lorentz transformation that incorporates the given quantification of the space expansion. This family of transformations is defined for all, where represents the limiting velocity of moving bodies at the expansion step n. From this limiting velocity it turns out that the velocity of light is decreasing as the universe expands, and the limiting velocity was maximal only at the step 0 (at the beginning of the universe expansion); however, at each step the velocity of light is constant in the sense that it is independent of the reference frame of the observer. All formulations and laws based on the assumption that the velocity of light in empty space is independent of the velocity of its source remain valid for matter and related physical phenomena during the universe expansion step by step, and the relativistic formulation of the physical laws remains valid _experimentally_, since the velocity of light at the step n is experimentally given by, which represents in this formalism, that is to say, where c is the fossil velocity of light (the fossil velocity of light could be detected in our space).
At each step of the universe expansion, the measured velocity of light remains invariant from one frame to another in uniform relative motion. The meaning of this approximation, based on the quantification of the universe, is that the velocity of light is constant locally and variable globally with respect to the age of the universe; what we mean by constant locally is that the variation of the velocity of light is too small to be detected over a short time with respect to the age of the universe. All our experimental tests and applications of the laws of special relativity at the present time are and remain valid, since substituting the experimental measure of the speed of light into the -Lorentz transformation will yield the -Lorentz transformations. Indeed, the velocity of light is locally constant at the scale of ordinary time (using a short interval of time as the unit) and globally variable at the scale of cosmic time (using a large interval of time as the unit). This local and global behavior can be derived straightforwardly from the quantification. Indeed, the local behavior is reached in the quantification if we use a large number of subdivisions: the larger the number of steps used, the shorter the time interval per step we obtain. Thus, as tends to infinity, we have, as a consequence of the convergence of the product; then, where is a large positive real number. Hence the equations ([cv]) and ([c]) give. Therefore the velocity of light is almost constant for a short period of time (locally).
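The "locally constant, globally decreasing" behavior can be illustrated numerically. The per-step factors below are purely hypothetical (the paper does not specify them in this extraction); they are chosen only so that the product is increasing, bounded, and greater than one, which is all the argument requires:

```python
def expansion_product(n, q=0.01):
    """Cumulative expansion product P_n = prod_{k=1}^{n} (1 + q^k).
    The factors (1 + q^k) are hypothetical stand-ins; any factors
    with a convergent infinite product give the same qualitative
    picture: P_n is increasing and bounded, so the limiting
    velocity c / P_n decreases but stabilizes."""
    p = 1.0
    for k in range(1, n + 1):
        p *= 1.0 + q ** k
    return p

c = 1.0  # fossil velocity of light, in arbitrary units
speeds = [c / expansion_product(n) for n in (0, 1, 5, 50)]
# the limiting velocity decreases from one step to the next ...
print(all(a > b for a, b in zip(speeds, speeds[1:])))
# ... but changes negligibly between late steps ("locally constant")
print(abs(c / expansion_product(50) - c / expansion_product(51)) < 1e-12)
```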
However, the equation ([cst]) is no longer valid for large periods of time. Indeed, the smaller the number of steps used, the larger the time interval per step we obtain, and in that case we have for all, and hence the equations ([c]) and ([cv0]) lead to the following inequality:, that is to say, the velocity of light is decreasing globally together with the universe expansion from one step to another. The formalism obtained within this work, using a special notion of expansion in the theory of special relativity to study the uniform motion of the observer relative to the source, is based on the validity of special relativity step by step, from the big bang to nowadays. This leads to the conclusion that this new formalism is an extension of the theory of special relativity. We close this article with some comments on the notion of time. The notion of relative time, as in special relativity, remains valid with some adjustment; from the metric ([m1]) it is clear that time is a required coordinate to specify a physical event, and that this quantity cannot be defined without the notion of space and the notion of expansion: the dynamics of the universe are involved in the definition of time. The expansion of the universe affects both time and space. If we refer to the proper time as the time measured by an observer in a reference frame, at the step n of the universe expansion, in which the events occur at the same spatial point, then, and the metric ([m1]) gives; meanwhile in another inertial reference frame the same events will satisfy ([m1]), which gives, from which we obtain, where. It turns out that the rate of proper time for a system varies with motion through space, as in special relativity, and what is new here is that the rate of the proper time varies together with the universe expansion, even if is constant. If we refer to the universe proper time as the time measured by an observer in a reference frame at rest at
the step n of the universe expansion, in which the events occur at the same spatial point, it is not difficult to extract from the metric ([m1]) the relationship between the proper time and the coordinate time. The time depends on the observer and on the expansion step of the universe. What is new here is to associate space, time and expansion in the definition of an event: any change of reference frame or of step affects all of them. If the proper time of the universe is the same from one step to another, and if the velocity and are constant from one step to another, then it turns out, from the step to the step, that, and then, which allows us to compare the coordinate time between successive steps as, which means that the clocks at the step n will run slower than identical clocks at the previous step: time will run slower together with the universe expansion. One more thing: the product which allows the quantification of the universe expansion was extracted from the fractal manifold model. More details about the natural construction of the expanding parameter and the universe expansion, as well as the shape of the universe, can be found in. F. Ben Adda, _Mathematical model for fractal manifold_, International Journal of Pure and Applied Mathematics, *38*, no. 2, 159-190, (2007). F. Ben Adda, _New understanding of the dark energy in the light of a new space-time_, in Proceedings of the Invisible Universe, AIP Conference Proceedings, vol. 1241, pp. 487-496, (2010). A. Compton, _A quantum theory of the scattering of X-rays by light elements_, Physical Review, *21*, 483-502, (1923). A. Einstein, _Does the inertia of a body depend upon its energy content?_, translated from "Ist die Trägheit eines Körpers von seinem Energieinhalt abhängig?", Annalen der Physik, *17*, (1905). A. Einstein, _Kosmologische Betrachtungen zur allgemeinen Relativitätstheorie_, Preussische Akademie der Wissenschaften, Sitzungsberichte, 142-152, (1917). H.
Poincaré, _Sur la dynamique de l'électron_, Comptes Rendus de l'Académie des Sciences de Paris, *140*, 1504-1508, (1905). H. Poincaré, _Sur la dynamique de l'électron_, Rendiconti del Circolo Matematico di Palermo, *21*, 127-175, (1906).
We call a pseudo-Boolean function. The present work discusses approaches to obtaining heuristics for the program \[\text{maximize } f(\boldsymbol{\gamma}) \quad \text{subject to } \boldsymbol{\gamma} \in \{0,1\}^d\] using sequential Monte Carlo techniques. In the sequel, we refer to as the _objective function_. For an excellent overview of applications of binary programming and equivalent problems, we refer to the survey paper by and the references therein. The idea of using particle filters for global optimization is not new [, Section 2.3.1.c], but novel sequential Monte Carlo methodology, making use of suitable parametric families on binary spaces, may allow the construction of more efficient samplers for the special case of pseudo-Boolean optimization. We particularly discuss how this methodology connects with the cross-entropy method, which is another particle-driven optimization algorithm based on parametric families. The sequential Monte Carlo algorithm as developed by is rather complex compared to local search algorithms such as simulated annealing or -opt local search, which can be implemented in a few lines. The aim of this paper is to motivate the use of advanced particle methods and sophisticated parametric families in the context of pseudo-Boolean optimization, and to provide conclusive numerical evidence that these complicated algorithms can indeed outperform simple heuristics if the objective function has poorly connected strong local maxima. This is not at all clear since, in terms of computational time, multiple randomized restarts of fast local search heuristics might very well be more efficient than comparatively complex particle approaches. The article is structured as follows. We first introduce some notation and review how to model an optimization problem as a filtering problem on an auxiliary sequence of probability distributions. Section [sec:smc] describes a sequential Monte Carlo sampler designed for global optimization on binary spaces. Section [sec:pfbs] reviews three parametric families for
sampling multivariate binary data which can be incorporated into the proposed class of particle algorithms. Section [sec:algorithms] discusses how the cross-entropy method and simulated annealing can be interpreted as special cases of the sequential Monte Carlo sampler. In Section [sec:applications] we carry out numerical experiments on instances of the unconstrained quadratic binary optimization problem. First, we investigate the performance of the proposed parametric families in particle-driven optimization algorithms. Secondly, we compare variants of the sequential Monte Carlo algorithm, the cross-entropy method, simulated annealing and simple multiple-restart local search to analyze their respective efficiency in the presence or absence of strong local maxima. We briefly introduce some notation that might be non-standard. We denote scalars in italic type, vectors in bold italic type and matrices in upright bold type. Given a set, we write for the number of its elements and for its indicator function. For, denote by the discrete interval from to. Given a vector and an index set, we write for the sub-vector indexed by and for its complement. We occasionally use the norms and.
For particle optimization, the common approach is to define a family of probability measures associated with the optimization problem in the sense that, where denotes the uniform distribution on the set and the set of maximizers. The idea behind this approach is to first sample from a simple distribution, potentially learn about the characteristics of the associated family, and smoothly move towards distributions with more mass concentrated on the maxima. We review two well-known techniques to explicitly construct such a family. We call a tempered family if it has probability mass functions of the form, where. As increases, the modes of become more accentuated until, in the limit, all mass is concentrated on the set of maximizers. The name reflects the physical interpretation of as the probability of a configuration for an inverse temperature and energy function. This is the sequence used in simulated annealing. We call a level set family if it has probability mass functions of the form, where \leq 1}] and ^{-1}] with as a _particle system_ with particles. We say the particle system _targets_ the probability distribution if the empirical distribution converges to for. We sample with and set to initialize the system. Suppose we are given a particle approximation of and want to target the subsequent distribution. For all and, we update the weights to, where denotes the unnormalized version of. The normalizing constants and defined in equations and are unknown, but the algorithm only requires ratios of unnormalized probability mass functions. We refer to as the _step length_ at time.
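For the tempered family, the weight update needs only the ratio of unnormalized mass functions, which for a density proportional to exp(beta * f) reduces to exp((beta_new - beta_old) * f(x)). A minimal sketch in our own notation (not the paper's):

```python
import numpy as np

def update_weights(log_w, f_vals, beta_old, beta_new):
    """Importance-weight update when moving from the tempered
    distribution proportional to exp(beta_old * f) to the one
    proportional to exp(beta_new * f). Works entirely on the
    log scale and returns normalized log-weights."""
    log_w = log_w + (beta_new - beta_old) * f_vals
    log_w = log_w - np.max(log_w)   # guard against overflow
    w = np.exp(log_w)
    return np.log(w / w.sum())
```

Increasing beta shifts mass towards particles with large objective values, exactly the accentuation of the modes described above.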
After updating, the particle system targets the distribution. If we choose larger, that is, further from, the weights become more uneven and the accuracy of the importance approximation deteriorates. If we repeat the weighting step, we just increase and finally obtain an importance sampling estimate of with instrumental distribution. This yields a poor estimator, since the probability of hitting the set with uniform draws is and decreases rapidly as the dimension grows. The pivotal idea behind sequential Monte Carlo is to alternate moderate updates of the importance weights with improvements of the particle system via resampling and Markov transitions. The importance weight degeneracy is often measured through the _effective sample size_ criterion, defined as .\] The effective sample size is if the weights are uniform, that is, equal to; the effective sample size is if all mass is concentrated in a single particle. Given any increasing sequence, we could repeatedly reweight and monitor whether the effective sample size falls below a critical threshold. For the special case of annealing via sequential Monte Carlo, however, the effective sample size after weighting is merely a function of. For a particle system at time, we pick a step length such that, that is, we lower the effective sample size with respect to the current particle approximation by some fixed ratio [,]. This ensures a 'smooth' transition between two auxiliary distributions, in the sense that consecutive distributions are close enough to approximate each other reasonably well using importance weights; in our numerical experiments, we took. We obtain the associated sequence by setting, where is the unique solution of. Since is continuous and monotonically decreasing in, we can use bisectional search to solve.
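A sketch of the effective-sample-size criterion and the bisectional search for the step length. The target ratio 0.9 below is illustrative only (the paper's chosen value is elided in this extraction), and the function names are ours:

```python
import numpy as np

def ess(log_w):
    """Effective sample size of a set of (possibly unnormalized)
    log-weights: 1 / sum(w_i^2) for the normalized weights w."""
    w = np.exp(log_w - np.max(log_w))
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)

def find_step_length(f_vals, log_w, alpha=0.9, tol=1e-8, upper=1e3):
    """Bisection for the temperature increment delta such that the
    ESS after reweighting by exp(delta * f) equals alpha times the
    current ESS. Assumes the ESS decreases in delta, which holds
    for the tempered-family update."""
    target = alpha * ess(log_w)

    def ess_after(delta):
        return ess(log_w + delta * f_vals)

    lo, hi = 0.0, upper
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if ess_after(mid) > target:   # ESS still too large: step further
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```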
in this section , we discuss how to condition the weighted system and improve its quality before we proceed with the next weighting step . we replace the system targeting by a selection of particles drawn from the current particle reservoir such that , where denotes the number of particles identical with . thus , in the resampled system , particles with small weights have vanished while particles with large weights have been multiplied . for the implementation of the resampling step , there exist several recipes . we could apply multinomial resampling , which is straightforward . there are , however , more efficient schemes like residual , stratified and systematic resampling . we use the last of these in our simulations ; see procedure [ algo : resample ] . with probability , the kernel proposes a uniform draw from the -neighborhood of ; we refer to this type of kernel as a _ symmetric kernel _ , since and equation simplifies . this class of kernels provides a higher mutation rate than the random - scan gibbs kernel ( see for a discussion ) . locally operating transition kernels of the symmetric type are known to be slowly mixing . if we put most weight on small values of , the kernel only changes one or a few entries in each step . if we put more weight on larger values of , the proposals will hardly ever be accepted if the invariant distribution is multi - modal . ideally , we want the particles sampled from the transition kernel to be nearly independent after a few move steps , which is often hard to achieve using local transition kernels .
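procedure [ algo : resample ] did not survive the extraction ; a standard systematic resampling sketch ( our own implementation , not necessarily the authors' exact procedure ) draws a single uniform variable and selects with evenly spaced pointers :

```python
import random

def systematic_resample(particles, weights, n, rng=random):
    """systematic resampling: one uniform draw u, then n pointers u + i/n
    swept through the cumulative normalized weights."""
    total = sum(weights)
    u = rng.random() / n
    out, cum, j = [], weights[0] / total, 0
    for i in range(n):
        p = u + i / n
        while p > cum:
            j += 1
            cum += weights[j] / total
        out.append(particles[j])
    return out
```

compared to multinomial resampling , the number of copies of each particle deviates from its expectation by at most one , which reduces the resampling noise .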
for the sequential monte carlo algorithm, we use _ adaptive independent kernels _ which have proposal distributions of the kind which do not depend on the current state but have a parameter which we adapt during the course of the algorithm .the adaptive independent kernel is rapidly mixing if we can fit the _parametric family _ such that the proposal distribution is sufficiently close to the target distribution , yielding thus , on average , high acceptance rates . the general idea behind this approachis to take the information gathered in the current particle approximation into account ( see e.g. ) .the usefulness of this strategy for sampling on binary spaces has been illustrated by .we fit a parameter to the particle approximation of according to some suitable criterion .precisely , is taken to be the maximum likelihood or method of moments estimator applied to the weighted sample .the choice of the parametric family is crucial to the implementation of a sequential monte carlo sampler with adaptive independent kernel .we discuss this issue in detail in section [ sec : pfbs ] .adaptation could , to a certain extent , also be done for local transition kernels . 
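the accept - reject step of such an adaptive independent kernel can be sketched as follows ( a minimal sketch with generic names of our own ; `log_target` is the unnormalized log mass function , and the proposal does not depend on the current state ) :

```python
import math
import random

def independent_mh_step(x, log_target, proposal_sample, proposal_logpdf, rng=random):
    """one step of an independent metropolis-hastings kernel: propose y from
    a state-independent distribution and accept with the usual hastings ratio."""
    y = proposal_sample()
    log_ratio = (log_target(y) + proposal_logpdf(x)) - (log_target(x) + proposal_logpdf(y))
    if rng.random() < math.exp(min(0.0, log_ratio)):
        return y
    return x
```

if the fitted proposal is close to the target , the log ratio is close to zero and acceptance rates are high , which is exactly the rapid - mixing property the text attributes to adaptive independent kernels .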
propose an adaptive kernel which replaces the full conditional distribution of the gibbs sampler by an easy - to - compute linear approximation which is estimated from the sampled particles . this method accelerates gibbs sampling if the target distribution is hard to evaluate , but it does not provide fast mixing like the adaptive independent kernel ( see for a comparison ) . still , the use of local kernels in the context of the proposed sequential monte carlo algorithm might be favorable if , for instance , the structure of the problem allows us to rapidly compute the acceptance probabilities of local moves . further , batches of local moves can be alternated with independent proposals to ensure that the algorithm explores the neighborhood of local modes sufficiently well . since the sample space is discrete , a given particle is not necessarily unique . this raises the question of whether it is sensible to store multiple copies of the same weighted particle in our system . in the sequel , we discuss some more details concerning this issue , which has only been touched upon briefly by . let denote the number of copies of the particle in the system . indeed , for reasons of parsimony , we could just keep a single representative of and aggregate the associated weights to . shifting weights between identical particles does not affect the nature of the particle approximation , but it obviously changes the effective sample size , which is undesirable since we introduced the effective sample size as a criterion to measure the goodness of a particle approximation .
from an aggregated particle system , we can not distinguish the weight disparity induced by reweighting according to the importance function from the weight disparity induced by multiple sampling of the same states , which occurs if the mass of the target distribution is concentrated . more precisely , we can not tell whether the effective sample size is actually due to the gap between and or due to the presence of particle copies caused by the mass of concentrating on a small proportion of the state space , which occurs by construction of the auxiliary distribution in section [ sec : stat model ] . aggregating the weights means that the number of particles is not fixed at runtime . in this case , the straightforward way to implement the move step presented in section [ sec : move ] is to break the particles up into multiple copies corresponding to their weights and to move them separately . but instead of permanently splitting and pooling the weights , it seems more efficient to just keep the multiple copies . we could , however , design a different kind of resample - move algorithm which first augments the number of particles in the move step and then resamples exactly weighted particles from this extended system using a variant of the resampling procedure proposed by . a simple way to augment the number of particles is sampling and reweighting via , where denotes the acceptance probability of the metropolis - hastings kernel . we tested this variant but could not see any advantage over the standard sampler presented in the preceding sections . for the augment - resample type of algorithm , the implementation is more involved and the computational burden significantly higher . in particular , the rao - blackwellization effect one might achieve when replacing the accept - reject steps of the transition kernel by a single resampling step does not seem to justify the extra computational effort .
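the distortion that pooling introduces can be seen in a two - line example ( our own construction ) : the particle approximation is unchanged , but the effective sample size is not .

```python
def ess(weights):
    """effective sample size criterion: (sum w)^2 / sum w^2."""
    s = sum(weights)
    return s * s / sum(w * w for w in weights)

# particle 'a' was sampled twice; keep the copies separate vs pool their weights
ess_split = ess([1.0, 1.0, 1.0, 1.0])   # copies of 'a' treated as distinct particles
ess_pooled = ess([2.0, 1.0, 1.0])       # weights of the copies aggregated
```

both weighted samples represent the same distribution , yet the pooled system reports a smaller effective sample size , which is why the text recommends refraining from aggregation when the criterion is used to steer the algorithm .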
indeed , aggregating the weights does not only prevent us from using the effective sample size criterion , but also requires extra computational time of in each iteration of the move step , since pooling the weights is as complex as sorting . with our application in mind , however , computational time is more critical than memory , and we therefore recommend refraining from aggregating the weights . [ sec : pfbs ] we review three parametric families on . in contrast to the similar discussion in , we also consider a parametric family which can not be used in sequential monte carlo samplers but only in the context of the cross - entropy method . for more details on parametric families on binary spaces we refer to . we frame some properties making a parametric family suitable as a proposal distribution in sequential monte carlo algorithms . a. for reasons of parsimony , we prefer a family of distributions with at most parameters , like the multivariate normal . b. given a sample from the target distribution , we need to estimate in a reasonable amount of computational time . c. we need to generate samples from the family , and we need the rows of to be independent . d. for the sequential monte carlo algorithm , we need to evaluate point - wise ; the cross - entropy method , however , still works without this requirement . e. we want the calibrated family to reproduce , e.g. , the marginals and covariance structure of , to ensure that the parametric family is sufficiently close to . the simplest non - trivial distributions on are certainly those having independent components . for a vector of marginal probabilities , we define the _ product family _ . we check the requirement list from section [ sec : properties ] : ( a ) the product family is parsimonious with . ( b ) the maximum likelihood estimator is the weighted sample mean . ( c ) we can easily sample . ( d ) we can easily evaluate the mass function . ( e ) however , the product family does not reproduce any dependencies we might observe in .
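a sketch of the product family operations just listed ( fitting by the weighted sample mean , sampling , and point - wise evaluation ; all names are ours ) :

```python
import math
import random

def fit_product(particles, weights):
    """maximum likelihood fit of the product family: the weighted sample mean."""
    total = sum(weights)
    d = len(particles[0])
    return [sum(w * x[i] for x, w in zip(particles, weights)) / total for i in range(d)]

def sample_product(p, rng=random):
    """draw each component independently with its marginal probability."""
    return tuple(1 if rng.random() < pi else 0 for pi in p)

def logpdf_product(p, x):
    """point-wise log mass function of the product family."""
    return sum(math.log(pi if xi else 1.0 - pi) for pi, xi in zip(p, x))
```

fitting , sampling and evaluation are all linear in the dimension , which covers requirements ( a ) - ( d ) ; requirement ( e ) fails because the components are independent by construction .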
the last point is the crucial weakness which makes the product family impractical for particle optimization on strongly multi - modal problems . consequently , the rest of this section deals with ideas on how to sample binary vectors with a given dependence structure . there are , to our knowledge , two major strategies to this end . 1 . we construct a generalized linear model which permits us to compute the conditional distributions . we apply the chain rule and write as , which allows us to sample the entries of a random vector component - wise . 2 . we sample from an auxiliary distribution and map the samples into . we call a copula family , although we refrain from working with explicit uniform marginals . we first present a generalized linear model and then review a copula approach . even for rather simple non - linear models , we usually can not derive closed - form expressions for the marginal probabilities required for sampling according to . therefore , we might directly construct a parametric family from its conditional probabilities . we define , for a lower triangular matrix , the _ logistic conditionals family _ by letting the conditional probability of each component be a logistic function of the preceding components , where . we refer to the online supplement for instructions on how to fit the parameters and . we check the requirement list from section [ sec : properties ] : ( a ) the gaussian copula family is sufficiently parsimonious with . ( b ) we can fit the parameters and via the method of moments . however , the parameter is not always positive definite . ( c ) we can sample using with . ( d ) we can not easily evaluate , since this requires computing high - dimensional integral expressions , which is a computationally challenging problem in itself ( see e.g.
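the chain - rule construction of the logistic conditionals family can be sketched as follows ( our own implementation ; the lower - triangular parameter `A` holds the intercepts on the diagonal and the interaction terms below it ) :

```python
import math
import random

def logistic(t):
    return 1.0 / (1.0 + math.exp(-t))  # note: a sketch, no overflow guard for large -t

def sample_logistic_conditionals(A, rng=random):
    """component-wise chain-rule sampling:
    gamma_i ~ bernoulli( logistic( A[i][i] + sum_{j<i} A[i][j] * gamma_j ) ).
    returns the vector together with its log mass, so the family can also be
    evaluated point-wise as required by the smc sampler."""
    d = len(A)
    gamma = [0] * d
    logp = 0.0
    for i in range(d):
        p = logistic(A[i][i] + sum(A[i][j] * gamma[j] for j in range(i)))
        gamma[i] = 1 if rng.random() < p else 0
        logp += math.log(p if gamma[i] else 1.0 - p)
    return tuple(gamma), logp
```

because each conditional is an explicit logistic regression , the family satisfies both the sampling and the point - wise evaluation requirements , at the price of parameters .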
) .the gaussian copula family is therefore less useful for sequential monte carlo samplers but can be incorporated into the cross - entropy method reviewed in section [ sec : cross entropy ] .( e ) the family reproduces the exact mean and , possibly scaled , correlation structure .we briefly discuss a toy example to illustrate the usefulness of the parametric families . for the quadratic function the associated probability mass function has a correlation matrix which indicates that this distribution has considerable dependencies and its mass function is therefore strongly multi - modal .we generate pseudo - random data from , adjust the parametric families to the data and plot the mass functions of the fitted parametric families .figure [ fig : toy exa ] shows how the three parametric families cope with reproducing the true mass function .clearly , the product family is not close enough to the true mass function to yield a suitable instrumental distribution while the logistic conditional family almost copies the characteristics of and the gaussian copula family allows for an intermediate goodness of fit .in this section , we provide a synopsis of all steps involved in the sequential monte carlo algorithm and connect this framework to the cross - entropy method and simulated annealing . in table[ tab : seq ] , we state the necessary formulas for the tempered and the level set sequence introduced in section [ sec : stat model ] . for convenience , we summarize the complete sequential monte carlo sampler in algorithm [ algo : smc ] . 
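a sketch of sampling from the gaussian copula family discussed above ( thresholding correlated gaussians ; the bisection - based `phi_inv` and all names are our simplifications ) :

```python
import math
import random

def cholesky(S):
    """plain cholesky factorization of a symmetric positive definite matrix."""
    d = len(S)
    L = [[0.0] * d for _ in range(d)]
    for i in range(d):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(S[i][i] - s) if i == j else (S[i][j] - s) / L[j][j]
    return L

def phi_inv(p, iters=60):
    """inverse standard normal cdf by bisection (clarity over speed)."""
    lo, hi = -10.0, 10.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def sample_gaussian_copula(mean_probs, corr, rng=random):
    """x_i = 1{ z_i <= phi_inv(p_i) } with z ~ N(0, corr); the marginals are
    exact, the correlation of x is a monotone transform of corr."""
    d = len(mean_probs)
    L = cholesky(corr)
    g = [rng.gauss(0.0, 1.0) for _ in range(d)]
    z = [sum(L[i][k] * g[k] for k in range(i + 1)) for i in range(d)]
    return tuple(1 if z[i] <= phi_inv(mean_probs[i]) else 0 for i in range(d))
```

sampling only needs a cholesky factor and gaussian draws , which is requirement ( c ) ; evaluating the mass function , by contrast , would require the high - dimensional orthant integrals mentioned in the text .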
note that , in practice , the sequence is not indexed by but rather by , which means that the counter is only given implicitly . the algorithm terminates if the particle diversity sharply drops below some threshold , which indicates that the mass has concentrated in a single mode . if we use a kernel with proposals from a parametric family , we might already stop if the family degenerates in the sense that only a few components of , say less than , are random while the others are constant ones or zeros . in this situation , additional moves using a parametric family are a pointless effort . we either return the maximizer within the particle system or we solve the remaining subproblem of dimension by brute - force enumeration . we might also perform some final local moves in order to further explore the regions of the state space the particles concentrated on . [ algorithm [ algo : smc ] ] for the level set sequence , the effective sample size is the fraction of the particles which have an objective function value greater than ; see table [ tab : seq ] and equation . the remaining particles are discarded since their weights equal zero . consequently , there is no need to explicitly compute as a solution of . we simply order the particles according to their objective values and only keep the particles with the highest objective values . rubinstein , who popularized the use of level set sequences in the context of the cross - entropy method , refers to as the size of the _ elite sample _ . the cross - entropy method has been applied successfully to a variety of combinatorial optimization problems , some of which are equivalent to pseudo - boolean optimization , and is closely related to the proposed sequential monte carlo framework .
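a minimal sketch of the elite - sample recursion with the lag ( smoothing ) parameter , using the product family as auxiliary distribution ( the defaults are our illustrative choices , not the paper's settings ) :

```python
import random

def cross_entropy_maximize(f, d, n=200, elite_frac=0.1, lag=0.5, iters=30, rng=random):
    """cross-entropy maximization of f on {0,1}^d: sample from a product
    family, keep the elite fraction, refit the marginals, and smooth the
    update with the convex combination p <- lag * p_old + (1 - lag) * p_fit."""
    p = [0.5] * d
    best = None
    for _ in range(iters):
        sample = [tuple(1 if rng.random() < pi else 0 for pi in p) for _ in range(n)]
        sample.sort(key=f, reverse=True)
        if best is None or f(sample[0]) > f(best):
            best = sample[0]
        elite = sample[: max(1, int(elite_frac * n))]
        fit = [sum(x[i] for x in elite) / float(len(elite)) for i in range(d)]
        p = [lag * po + (1.0 - lag) * pn for po, pn in zip(p, fit)]
    return best
```

note that no transition kernel and no point - wise evaluation of the proposal appear here , which is the special - case relation to the sequential monte carlo sampler discussed next .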
however , the central difference between the cross - entropy method and the sequential monte carlo algorithm outlined above is the use of the invariant transition kernel in the latter . we obtain the cross - entropy method as a special case if we replace the kernel by its proposal distribution . the sequential monte carlo approach uses a smooth family of distributions and explicitly schedules the evolution , which in turn leads to the proposal distributions . the cross - entropy method , in contrast , defines the subsequent proposal distribution without any reference sequence to balance the speed of the particle evolution . in order to decelerate the advancement of the cross - entropy method , we introduce a lag parameter and use a convex combination of the previous parameter and the parameter fit to the current particle system . however , there are no guidelines on how to adjust the lag parameter during the run of the algorithm . therefore , the sequential monte carlo algorithm is easier to calibrate , since the reference sequence controls the stride and automatically prevents the system from overshooting . on the upside , the cross - entropy method allows for a broader class of auxiliary distributions , since we do not need to evaluate point - wise , which is necessary in the computation of the acceptance probability of the hastings kernel ; see section [ sec : gaussian copula ] . [ table [ tab : seq ] : formulas for optimization sequences ] a well - studied approach to pseudo - boolean optimization is simulated annealing .
while the name stems from the analogy to the annealing process in metallurgy , there is a pure statistical meaning to this setup . we can picture simulated annealing as approximating the mode of a tempered sequence using a single particle . since a single observation does not allow for fitting a parametric family , we have to rely on symmetric transition kernels in the move step . a crucial choice is the sequence , which in this context is often referred to as the _ cooling schedule _ . there is a vast literature advising on how to calibrate , where a typical guideline is the expected acceptance rate of the hastings kernel . we calibrate such that the empirical acceptance rate follows approximately . [ for x in b^d and x in b^{d(d+1)/2} , the linearization constraints x_{ij} <= x_{ii} , x_{ij} <= x_{jj} and x_{ij} >= x_{ii} + x_{jj} - 1 hold for all i , j in {1 , ... , d} . ] since the gradient of the log - likelihood involves logistic( x^T a ) x , the hessian matrix of the log - likelihood is - ( z^{(i)} )^T diag( w ) diag( q^{(i)}_a ) z^{(i)} , where the entries of q^{(i)}_a are the logistic variances . while the parameter is easy to assess , the challenging task is to compute the bivariate variances for all . recall the standard result [ , p.255 ] , where denotes the density of the bivariate normal distribution with correlation coefficient . we obtain the following newton - raphson iteration starting at some initial value ; see procedure [ algo : fit gaussian ] . in the sequential monte carlo context , good initial values are obtained from the parameters of the previous auxiliary distributions . we use a fast series approximation to evaluate . these approximations are critical when comes very close to either boundary of . the newton iteration might repeatedly fail when restarted at the corresponding boundary .
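the single - particle annealing scheme described at the start of this passage can be sketched with a symmetric one - flip kernel ; the geometric cooling schedule below is our illustrative assumption , not the acceptance - rate calibration used in the paper :

```python
import math
import random

def simulated_annealing(f, d, betas, rng=random):
    """single-particle simulated annealing on {0,1}^d: a symmetric one-flip
    proposal accepted by the metropolis rule for the tempered target exp(beta*f)."""
    x = tuple(rng.randint(0, 1) for _ in range(d))
    fx = f(x)
    best, fbest = x, fx
    for beta in betas:
        i = rng.randrange(d)
        y = x[:i] + (1 - x[i],) + x[i + 1:]
        fy = f(y)
        if fy >= fx or rng.random() < math.exp(beta * (fy - fx)):
            x, fx = y, fy
        if fx > fbest:
            best, fbest = x, fx
    return best, fbest

# hypothetical geometric cooling schedule beta_t = 0.1 * 1.005^t
schedule = [0.1 * 1.005 ** t for t in range(2000)]
```

since the proposal is symmetric , the proposal terms cancel in the acceptance ratio , which is why a single particle suffices and no parametric family needs to be fitted .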
in any event , is strictly monotonic in since its derivative is positive , and we can switch to bisection if necessary . the locally fitted correlation matrices might not be positive definite for , because the gaussian copula is not flexible enough to model the full range of cross - moments binary distributions might have ( see for an extended discussion ) . we obtain a feasible parameter by replacing with , where is smaller than all eigenvalues of the locally fitted matrix . this approach evenly lowers the local correlations to a feasible level and is easy to implement with standard software . in practice , we also prefer to work with a sparse version of the gaussian copula family that concentrates on strong dependencies and sets minor correlations to zero . abstract : we discuss a unified approach to stochastic optimization of pseudo - boolean objective functions based on particle methods , including the cross - entropy method and simulated annealing as special cases . we point out the need for auxiliary sampling distributions , that is , parametric families on binary spaces , which are able to reproduce complex dependency structures , and illustrate their usefulness in our numerical experiments . we provide numerical evidence that particle - driven optimization algorithms based on parametric families yield superior results on strongly multi - modal optimization problems , while local search heuristics outperform them on easier problems .
it is not clear when the mantra `` room temperature superconductivity '' first passed the lips of superconductivity researchers , but it has attained some prominence in the literature since little used it in the title ( `` superconductivity at room temperature '' ) of an overview in 1965 of his electronic polarization mechanism proposed for polymeric systems .ginzburg immediately transitioned this to two - dimensional metal - insulator sandwich materials and generalized the description. sad to say , these electronic - mediated superconductors are yet to be realized .uses of the term _ room temperature superconductivity _ have ranged from general speculations to glimpses of `` glitchite'' to editorial coverage of public announcements to preplanning for applications. in the ensuing decades vitaly ginzburg has been the most visible and persistent advocate that there is no theoretical justification for pessimism .ginzburg was elaborating more generally on `` high temperature superconductivity '' by 1967 and his optimism has never flagged .hts ( cuprate high temperature superconductors ) set the standard with t 90 k in 1987 and rising to k by the mid-1990s , but there have been no further increases . 
since there is no viable theory of the mechanism of hts , there is no approach that will allow the rational design of new materials with higher in that class .the phonon mechanism however has an accurate microscopic theory that is valid to reasonably strong coupling , and therefore invites the design of materials with higher t .the recent and very surprising breakthroughs in phonon - mediated superconductivity can be put to use to provide a plausible recipe for the design of a room temperature superconductor .this is the purpose of this modest proposal , where a central feature is the emphasis on _ control _ of the coupling , versus the standard approach of increasing the _ brute strength _ of the coupling .nearing the end of the 1970s there was pessimism about raising the superconducting transition temperature ( t ) significantly , and there were claims ( mostly unpublished ) that the maximum t was around 30 k. the only superconductivity known at the time ( excluding the superfluidity of which was unlike anything seen in metals ) was symmetric ( -wave ) pairing mediated by vibrations of the atoms ( phonons ) .the maximum t known at the time was 23 k in the a15 system .the maximum t had increased by only 6 k in 25 years , and it was not due to increase for several more years ( 1986 , when hts was announced ) .the discovery in 2001 of mgb , with t=40 k , was stunning not only because of the unimaginably high t for phonon - mediated superconductivity , but also because it appeared in a completely wrong kind of material according to the accumulated knowledge .this aspect has been dealt with in some detail now , and the pairing mechanism , character , and strength is well understood . while there has been some effort to apply what has been learned from mgb to find other similar , or even better , superconductors ,so far mgb remains in a class by itself .diamond , when it is heavily doped with b , becomes superconducting with reports as high as 12 k. 
b - doped diamond has features in common with mgb except for the two - dimensionality ( 2d ) of mgb 's electronic structure . it is this 2d character we deal with here , one important aspect of which has not been emphasized previously . we confine our consideration to ambient pressure ; applied pressure could well provide enhancements . migdal - eliashberg ( me ) theory of superconductivity is firmly grounded on the material - specific level and is one of the impressive successes of condensed matter theory . stripped to its basics , ( phonon - coupled ) t depends on the characteristic frequency of the phonons and on the strength of coupling ; in the strong - coupling regime of interest here , the retarded coulomb interaction loses relevance . the essence of increasing t lies in making one or both of these characteristic constants larger while avoiding structural instability . the focus here is on the decomposition of , where the mode coupling for momentum is given for circular fermi surfaces in two dimensions by the nesting expression n(0)^{-2} sum_k delta( epsilon_k ) delta( epsilon_{k+q} ) = 1 / [ eta_q sqrt( 1 - eta_q^2 ) ] , with eta_q = q / 2k_F [ eq . ( xiq ) ] . here n(0) is the fermi level ( e=0 ) density of states , is the squared electron - displaced - ion matrix element averaged over the fermi surface , is the atomic mass , and is the characteristic physical phonon frequency . the el - ph matrix element involves , , and in the standard way . here the band degeneracy factor ( number of fermi surfaces ) is treated as in mgb , where the two surfaces are equally important . in the conventional adiabatic approximation , phonon renormalization is given by the real part of the phonon self - energy at zero frequency , which is constant for eta_q < 1 and falls off for eta_q > 1 [ eq . ( chiq ) ] . both here and above , the final expression has been written for mgb - like systems with cylindrical fermi surfaces .
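the 2d nesting expression can be checked numerically ; the sketch below ( our own normalization ) evaluates xi( eta ) = 1 / [ eta sqrt( 1 - eta^2 ) ] and verifies that its integral over the kohn disk , which reduces to the pure number pi/2 , carries no dependence on the fermi momentum :

```python
import math

def nesting_2d(eta):
    """normalized 2d nesting function xi proportional to 1/(eta*sqrt(1-eta^2)),
    with eta = q / (2 k_F), defined for 0 < eta < 1."""
    return 1.0 / (eta * math.sqrt(1.0 - eta * eta))

def kohn_disk_integral(n=200000):
    """midpoint rule for the disk integral
    int_0^1 xi(eta) * eta d eta = int_0^1 d eta / sqrt(1 - eta^2) = pi / 2;
    the divergences at eta -> 0 and eta -> 1 are integrable."""
    h = 1.0 / n
    return sum(nesting_2d((k + 0.5) * h) * ((k + 0.5) * h) * h for k in range(n))
```

the k_F - independence of this integral is the phase - space property invoked later in the text : the total coupling collected inside one kohn region does not grow with the size of the region .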
the behavior of this renormalization is pictured in fig . [ renorm ] ; note specifically the independence of the degree of softening on . it is the last result , eq . [ chiq ] , that we emphasize here : the phonon renormalization is constant ( _ i.e. _ controlled ) for , and diminishes quickly for larger . for circular fermi surfaces there is no sharp peak in at nor anywhere else . in mgb , phonon coupling arises dominantly from , and only the two bond - stretching branches have the very large matrix elements . the number of phonon scattering processes from / to the fermi surface is quantified by the `` nesting function '' ( given here in normalized form , whose zone sum is unity ) . for the 2d circular dispersion relation , it is the simple known quantity in eq . [ xiq ] . it has simple integrable divergences at = 0 and , which , most importantly , do not result in -dependent softening , hence avoiding instability in spite of contributing positively to . this behavior can be contrasted with the vast uncertainty inherent in a general fermi surface , which has arisen forcefully in recent discoveries . under pressure , elemental li metal becomes superconducting up to near 20 k , as shown by three groups , in spite of being a metal with a simple fermi surface .
in fig . [ xiq3d ] , is displayed in three planes in the zone , where extremely sharp structure is apparent . the sharp structure near the k symmetry point leads to a lattice instability ( large contribution to ) . the sharp structure occurs in spite of a very simple cu - like fermi surface consisting of spheres joined by necks along directions . this example demonstrates why _ control _ of the -dependence of the coupling is essential ; is not so large for li at this volume ( ) , yet the lattice has been pushed to instability . this example shows that the overall value of is not the indicator of instability of the system , which occurs when some phonon frequency is renormalized ( softened ) to zero . the crucial features to be learned from mgb and li under pressure follow . 1 . high frequency is important ; this has long been understood . by beginning from a very stiff unrenormalized lattice , a crystal can withstand a great deal of renormalization to lower frequencies , which must accompany strong coupling . 2 . very high mode - s ( up to 20 - 25 for mgb ) can arise without instability . mgb appears not to be near any instability , although analysis shows that only - 20% stronger coupling would result in instability . 3 . the phonon softening in 2d systems due to strong coupling , and the total , is independent of carrier concentration ( or varies smoothly if the effective mass changes with doping ) . 4 . impressive results can be attained from strong coupling to only a fraction of the modes .
for mgb , t = 40 k with only 3% of the modes strongly coupled . these modes are the bond - stretch modes ( 2 out of 9 branches ) with ( 12% of the zone ) . general - shaped fermi surfaces , even the simple one of li , can readily lead to lattice instability from a thin surface of soft modes in q - space . such instability restricts the achievement of high t . two - dimensional parabolic bands provide _ ideal control _ of the q - dependence of the coupling strength : the phonon renormalization is _ constant _ for , not sharply peaked in an unexpected region of the zone as can happen for a fermi surface of general shape ( viz . for li in fig . [ xiq3d ] ) . the superconducting t of mgb is remarkable , and arises from the extreme strong coupling of 3% of its phonons . the electron - phonon deformation potential is very large for the bond - stretch modes , and mgb gains a factor = 4 from having two fermi surfaces . figure [ hex1 ] illustrates ( 1 ) the -band fermi surfaces , idealized to identical circular surfaces , and ( 2 ) the kohn - anomaly region of renormalized phonons . in mgb , the `` kohn surface '' encloses only 12% of the zone . previous analysis has shown that pushing these phonons further , by either increasing the bare coupling strength or varying the lattice stiffness ( bare phonon frequency ) separately , could increase t by perhaps 20% but would then drive the material to a lattice instability . it is fairly obvious that if one increases both proportionately , then one gains by enhancing the characteristic frequency while keeping constant . this is the _ metallic hydrogen _ scenario that was discussed originally 40 years ago . there is , however , an equally obvious approach : introduce more phonons into the coupling . from this viewpoint , mgb with its 3% of useful phonons seems pathetic , yet as noted in sec . iii , increasing the number of strongly coupled phonons by increasing the doping does not enhance , because the individual mode - s decrease accordingly .
and further increasing the matrix elements , which increases , also readily leads to instability . the key to increasing is not in doing more ( more carriers , hence more phonons ) , but in doing it _ again _ and _ again _ , with other fermi surfaces and other phonons . here , _ control _ , not over the strength of the coupling but over the _ specific momenta _ [ that is , , or more specifically ] , becomes essential . the requirement is to renormalize frequencies in other parts of the zone _ without further softening _ the ones that are already very strongly renormalized , since that will not lead to lattice instability . two - dimensional circularly - symmetric dispersion relations provide that control , as demonstrated by the equations above . adding an additional circular fermi surface centered at gives another region of renormalized phonons , and contributions to , of radius centered at . let us skip directly to the optimal case , illustrated in fig . [ hex19 ] . by adding to the mgb zone - centered surface another spherical fermi surface half - way along the -k line , with radius 1/8 of the length of the -k line , one obtains three new spanning vectors and their symmetry partners , producing an array of close - packed kohn surfaces of radius within which coupling is strong . given the close - packing fraction in 2d , = 0.907 , this arrangement manages to utilize 90% of the zone for strongly coupled phonons , an increase in the fraction of the zone used , compared to mgb ( 12% ) , by a factor of 7.5 .
this extension from mgb is not yet optimal , because mgb uses only 2/9 of its branches . optimally , every branch would be drafted into service in strong coupling , giving another factor of 4.5 , or a total enhancement of - 35 . the strongly coupled modes in mgb have mode - s of mean value 20 - 25 ( calculations so far have not been precise enough to pin this down ) . let us not be pessimistic , and therefore use = 25 ; this value is consistent with the value of from the strongly coupled modes divided by 3% , i.e. 0.7/0.03 . then , with 90% participation of the phonons , = 0.9 x 25 = 22.5 . using the allen - dynes equation to account properly for the strong - coupling limit that is being approached , and using the mgb frequency of 60 mev , one obtains t = 430 k . the strong - coupling limit of the ratio is 13 , so we can estimate the gap of such a superconductor to be ev . this will be an interesting superconductor indeed . whereas the architectural design of a room temperature superconductor provided here is straightforward , the structural engineering necessary to implement this vision will require creativity and knowledge of materials . where does one look ? figure [ hex19 ] gives an idea of a set of fermi surfaces that could be promising . this pattern is very much like that of na , although its t = 4.5 k , and even that not necessarily from phonons , is not a good example of strong coupling enhanced by fermi surface arrangement . ( the zone - centered fermi surface in na is much larger than the others . ) the encouraging feature to be emphasized here is that this example shows that the desirable arrangement of figure [ hex19 ] is nothing unusual . not only are the fermi surfaces very near the midpoint of the -k line , but they are nearly circular , although not enforced by symmetry . a more provocative example is that of the layered , electron - doped system li , which has up to t = 15 k for m = zr , and t = 26 k for m = hf .
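the t estimate can be reproduced with the allen - dynes formula ; the sketch below uses mu* = 0.1 and assumes the spectral shape factor is unity ( neither value is stated in the text ) , so the output lands near , not exactly at , the quoted 430 k :

```python
import math

def allen_dynes_tc(lam, omega_log_K, mu_star=0.1, omega2_ratio=1.0):
    """allen-dynes Tc with the strong-coupling (f1) and spectral-shape (f2)
    correction factors; omega_log_K is the characteristic frequency in kelvin,
    omega2_ratio = sqrt(<omega^2>)/omega_log (assumed 1.0 here)."""
    L1 = 2.46 * (1.0 + 3.8 * mu_star)
    L2 = 1.82 * (1.0 + 6.3 * mu_star) * omega2_ratio
    f1 = (1.0 + (lam / L1) ** 1.5) ** (1.0 / 3.0)
    f2 = 1.0 + ((omega2_ratio ** 2 - 1.0) * lam ** 2) / (lam ** 2 + L2 ** 2)
    expo = -1.04 * (1.0 + lam) / (lam - mu_star * (1.0 + 0.62 * lam))
    return f1 * f2 * (omega_log_K / 1.2) * math.exp(expo)

omega_log = 60.0 * 11.604   # 60 meV expressed in kelvin
lam = 0.9 * 25.0            # 90% phonon participation at mode-lambda 25
tc = allen_dynes_tc(lam, omega_log)

# bookkeeping of the enhancement factors quoted in the text
zone_gain = 0.907 / 0.12    # close-packed kohn disks vs the 12% used in mgb: about 7.6
branch_gain = 9.0 / 2.0     # all nine phonon branches vs two: 4.5
```

the result is comfortably above room temperature for these inputs ; the exact number depends sensitively on mu* and on the shape of the phonon spectrum , which is why only the order of magnitude should be taken seriously .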
for , which probably includes the accessible concentrations , this system has very nearly circular k - centered fermi surfaces whose nesting possibilities have been noted . there are surfaces at two inequivalent points k , resulting in kohn regions centered at the zone center ( degeneracy two ) and [ because k scattering involves momentum transfer k in the hexagonal zone ] at each of the pair of points k ( also degeneracy two ) . the figure of kohn surfaces would look like fig . [ hex19 ] with the circles centered at the and points missing . indeed , heid and bohnen calculate phonon softening around k and the zone center , but alas they find that the calculated coupling strength comes well short of accounting for t = 15 k in li ( not to mention the question of t = 26 k in li ) . the superconductivity in this system remains unexplained , but the placement of its 2d circular fermi surfaces illustrates the first step in pulling more phonons into the coupling . while the electronic system we seek is 2d in character , we also seek ( strong ) coupling to phonons of arbitrary momentum and polarization . mgb is a bad example here , since the strong bonding lies solely within the layer ; the -axis modes are hardly coupled . a lattice with strong bonds having a substantial component perpendicular to the layers has a better likelihood of coupling to bond - stretch modes .
From this point of view the Li_xZrNCl system seems to be a step in the right direction, as it has Zr-N bonds both within the layer and perpendicular to it. Heid and Bohnen found moderate coupling both to two stiff modes (primarily N character) and to two softer modes. These modes are however polarized in-plane; the perpendicular Zr-N modes do not couple to the 2D electronic system. Creativity will be required to find how to involve a larger fraction of modes in the coupling. Since there may be people who are truly interested in this topic of room temperature superconductivity, the basic premises and concepts should be made as clear and precise as possible ("truth in advertising"). The following issues should be mentioned. The use of the term _optimal_ as used above was disingenuous and incorrect, and that discussion as it stands is unduly pessimistic. Close-packing of the 2k_F-diameter Kohn circular surfaces does indeed use the area of the zone efficiently (although mathematicians or engineers would fill the holes with smaller circles, and then again and again ad infinitum, thereby using 100% of the area). However, maximizing the area is _not_ the objective. The objective is to maximize Tc or, to simplify slightly for now, to maximize λ. Phase space in 2D leads to the non-intuitive result that integration of the coupling over a Kohn region of diameter 2k_F is independent of k_F, as long as the ME theory we are applying holds. The objective then is to maximize the _number_ of Kohn regions. Since there are likely to be different values of λ_j and ω_j for the different regions, it will be something like the quantity $\sum_{j}^{N} [\cdots]_j/\omega_j$ that needs maximizing, where N is the (variable) number of Kohn regions. Care must be taken to keep the Kohn regions from overlapping, lest the combined renormalization of the phonons be so strong as to drive an instability. Also, the band degeneracy factor seems to be extremely helpful, but must be watched. (In eq. [optimum]
there would be some factor related to of identical terms . ) for example , in fig .[ hex19 ] each of the fermi surfaces contributes ( via intrasurface scattering ) to the coupling strength and accompanying phonon softening inside the kohn region at , a factor of 12 in this case .a good strategy would be to have matrix elements be larger for , which have smaller degeneracy factors , than for with its large multiplier , thereby crafting strong coupling while avoiding the instability .the validity of the theory bears reconsideration .it is safe to say that if for every phonon , then me ( second order perturbation ) theory is safe . if this inequality is not satisfied for a small fraction of the coupling strength , corrections are probably minor .however , for the typical strongly - coupled phonon in mgb , the condition of validity is violated badly for _ every important phonon _ , making me theory in mgb unjustified as pointed out earlier. nevertheless , its predictions seem to be reasonable for mgb , so it is reasonable for us to extrapolate the theory to include more phonons with similar strength of coupling .it was noted above that for a frequency 60 mev as in mgb , 20 is required to reach the vicinity of room temperature .while there have been many papers addressing the very strong regime and resulting polarons and bipolarons , we are not aware of any that address seriously such coupling in a degenerate electron system .there is a general expectation that the electron system becomes unstable , but unstable to what is unclear ; a degenerate gas of polarons ( of the order of one per atom ) does not seem like a clear concept .( if the instability occurs only below t , it may not control or limit the pairing at high temperature . )the description of such a system is still unknown ; while mgb has a finite concentration ( 3% ) of _ extremely _ strongly coupled phonons , the net value of is less than unity . 
Following the materials design proposed here will lead to the study of a new materials regime as well as higher Tc. A blueprint for the design of a room temperature superconductor has been provided here. The essence is this: very strong q-dependent coupling in a _controlled_ fashion is possible, and one follows the path toward getting as much as one can out of both the electronic and the phononic systems. Two-dimensional electronic structures with circular Fermi surfaces give an unsurpassed level of control of the q-dependence, which if uncontrolled can and does lead to structural instabilities even at moderate total coupling strength. Stiff lattices are important, as is getting as many of the branches as possible involved in coupling. On the one hand, MgB2, impressive as it is, seems to be doing a poor job in most respects of making use of the available phase space: it uses only 12% of all the possible phonon momenta, and only 2/9 of its phonon branches, netting only 3% of phonons involved in coupling. On the other hand, MgB2 excels at producing extremely large electron-displaced-ion matrix elements, and its 2D phase space keeps the extremely large coupling (to those 3%) firmly under control. All things considered, it seems reasonable to expect that materials exist, or can be made, that will improve on MgB2's current record. There is no promise here, nor even expectation, that such improvement will be easy.
Although one can employ tight-binding models to suggest crystal structures and interatomic interactions that will place band extrema in the desired positions in the zone, synthesizing the corresponding material is more uncertain. We work with discrete nuclear charges, so this is a Diophantine problem rather than a continuous one, and bonding properties can change rapidly from atom to atom. In addition, incremental improvements lead to even more incremental payoffs. The behavior of Tc(λ) in the strong-coupling regime, while monotonically increasing, provides a law of diminishing returns: doubling Tc requires quadrupling λ, a sobering prospect. Finally, it should be re-emphasized that no realistic theory of the degenerate electronic system in the presence of phonon coupling in this regime exists, but this is another, perhaps better, physical reason to push to stronger coupling systems. And think of it: wouldn't it be really great to carry around a spool of superconducting wire in your pocket? This work was stimulated in part by preparation for, and attendance of, the workshop on _The Possibility of Room Temperature Superconductivity_, held June 2005 at the University of Notre Dame. I have benefited from discussion and collaboration with numerous workers in the field. Our recent work in this area has been supported by National Science Foundation grant DMR-0421810. Support from the Alexander von Humboldt Foundation during the latter part of this work is gratefully acknowledged. W. A. Little, Scientific American *212*, 21 (1965). V. L. Ginzburg, Phys. Lett. *13*, 101 (1964); Sov. Phys. JETP *19*, 269 (1964). _On the problem of high temperature superconductivity._ J. Ladik and A. Bierman, Phys. Lett. A *29*, 636 (1969). _On the possibility of room-temperature superconductivity in double stranded DNA._ K. Antonowicz, Nature *247*, 358 (1974). _Possible superconductivity at room temperature._ J. Langer, Solid State Commun.
* 26 * , 839 ( 1978 ) . _ unusual properties of aniline black does superconductivity exist at room - temperature ? _k. s. jayaraman , nature * 327 * , 357 ( 1987 ) ._ superconductivity at room - temperature _ c. k. n. patel and r. c. dynes , proc .sci . * 85 * , 4945 ( 1988 ) ._ towards room - temperature superconductivity_ v. l. ginzburg and d. a. kirzhnitz , doklady akad .sssr * 176 * , 553 ( 1967 ) ._ on high - temperature and surface superconductivity_ j.m .an and w.e .pickett , phys .lett . * 86 * , 4366 ( 2001 ) y. kong , o. v. dolgov , o. jepsen , and o. k. andersen , phys .b * 64 * , 020501 ( 2001 ) .j. kortus , i. i. mazin , k. d. belashchenko , v. p. antropov , and l. l. boyer , phys .lett . * 86 * , 4656 ( 2001 ) . i. i. mazin and v. p. antropov , physica c * 385 * , 49 ( 2003 ) .e. a. ekimov , v. a. sidorov , e. d. bauer , n. n. melnik , n. j. curro , j. d. thompson , and s. m. stishov , nature * 428 * , 542 ( 2004 ) .y. takano , m. nagao , k. kobayashi , h. umezawa , i. sakaguchi , m. tachiki , t. hatano , and h. kawarada , appl .lett . * 85 * , 2581 ( 2004 ) .l. boeri , j. kortus , and o. k. anderson , phys .* 93 * , 237002 ( 2004 ) .k .- w . lee and w. e. pickett , phys .lett . * 93 * , 237003 ( 2004 ) .x. blase , ch .adessi , and d. conntable , phys .lett . * 93 * , 237004 ( 2004 ) .h. j. xiang , z. li , j. yang , j. g. hou , and q. zhu , phys .b * 70 * , 212504 ( 2004 ) . y. ma , j. s. tse , t. cui , d. d. klug , l. zhang , y. xie , y. niu , and g. zou , phys .b * 72 * , 014306 ( 2005 ). p. b. allen , phys .b * 6 * , 2577 ( 1972 ) ; p. b. allen and m. l. cohen , phys . rev* 29 * , 1593 ( 1972 ) .a numerical correction is given in eq .( 4.27 ) of p. b. allen , in _ dynamical properties of solids _ , ch .2 , edited by g. k. horton and a. a. maradudin ( north - holland , amsterdam , 1980 ) .w. e. pickett , j. m. an , h. rosner , and s. y. savrasov , physica c * 387 * , 117 ( 2003 ) .j. m. an , s. y. savrasov , h. rosner , and w. e. pickett , phys . 
rev .b * 66 * , 220502 ( 2002 ) .i. i. mazin , private communication .k. shimizu , h. kimura , d. takao , and k. amaya , nature * 419 * , 597 ( 2002 ) . v.v. struzhkin , m. i. eremets , w. gan , h .- k .mao , and r. j. hemley , science * 298 * , 1213 ( 2002 ) . s. deemyad and j. s. schilling , phys .lett . * 91 * , 167001 ( 2003 ) .d. kasinathan , j. kune , a. lazicki , h. rosner , c. s. yoo , r. t. scalettar , and w. e. pickett , phys .lett . * 96 * , 047004 ( 2006 ) .g. profeta , c. franchini , n. n. lathiotakis , a. floris , a. sanna , m. a. l. marques , m. lueders , s. massidda , e. k. u. gross , and a. continenza , phys .* 96 * , 047003 ( 2006 ) .w. e. pickett , braz .* 33 * , 695 ( 2003 ) .l. boeri , g.b .bachelet , e. cappelluti and l. pietronero , phys .b * 65 * , 214501 ( 2002 ) .l. boeri , e. cappelluti and l. pietronero , phys .b * 71 * , 012501 ( 2005 ) . | the vision of `` room temperature superconductivity '' has appeared intermittently but prominently in the literature since 1964 , when w. a. little and v. l. ginzburg began working on the _ problem of high temperature superconductivity _ around the same time . since that time the prospects for room temperature superconductivity have varied from gloom ( around 1980 ) to glee ( the years immediately after the discovery of hts ) , to wait - and - see ( the current feeling ) . recent discoveries have clarified old issues , making it possible to construct the blueprint for a viable room temperature superconductor . |
random walks are widely used in physics so as to model the features of transport processes where the migrating ( possibly massless ) particle undergoes a series of random displacements as the effect of repeated collisions with the surrounding environment .while much attention has been given to random walks on regular euclidean lattices , and to the corresponding scaling limits , less has been comparatively devoted to the case where the direction of propagation can change continuously at each collision : for an historical review , see , e.g. , .such processes , which are intimately connected to the boltzmann equation , have been named _ random flights _ , and play a prominent role in the description of , among others , neutron or photon propagation through matter , chemical and biological species migration , or electron motion in semiconductors . within the simplest formulation of this model , which was originally proposed by pearson ( 1905 ) and later extended by kluyver ( 1906 ) and rayleigh ( 1919 ) , it is assumed that particles perform random displacements ( ` flights ' ) along straight lines , and that at the end of each flight ( a ` collision ' with the surrounding medium ) the direction of propagation changes at random .when the number of transported particles is much smaller than the number of the particles of the interacting medium , so that inter - particles collisions can be safely neglected , it is reasonable to assume that the probability of interacting with the medium is poissonian . for the case of neutrons in a nuclear reactor ,e.g. , the ratio between the number of transported particles and the number of interacting nuclei in a typical fuel configuration is of the order of , even for high flux reactors .it follows that flight lengths between subsequent collisions are exponentially distributed ( hence we will call this process _ exponential flights _ in the following ) .we assume that collisions can be either of scattering or absorption type . 
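The exponentially distributed flight lengths implied by the Poissonian collision statistics are straightforward to sample by inversion of the cumulative distribution; a minimal sketch (the value of the cross section is an arbitrary illustrative choice):

```python
import math
import random

random.seed(0)

sigma = 2.0          # total cross section, i.e. inverse mean free path (illustrative value)
n_samples = 200_000

# inverse-CDF sampling of p(l) = sigma * exp(-sigma * l):  l = -ln(1 - U) / sigma
paths = [-math.log(1.0 - random.random()) / sigma for _ in range(n_samples)]

mean_free_path = sum(paths) / n_samples
print(mean_free_path)    # close to 1 / sigma = 0.5
```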
At each scattering collision, the flight direction changes at random, whereas at absorption events the particle disappears and the flight terminates. Each flight can be seen as a random walk in the phase space of position and direction in a d-dimensional setup. The particle density represents the probability density of finding a transported particle at a given position, having a given direction at a given time, up to an appropriate normalization factor. In many applications, the actual physical observable is the average of the density over the directions, where the normalization factor corresponds to the surface of the unit d-dimensional sphere and involves the Gamma function. Along with the development of Monte Carlo methods, numerical solutions to complex three-dimensional linear/nonlinear transport problems coming from applied sciences are becoming accessible to a high degree of accuracy: criticality calculations in reactor cores, scattering and absorption in heated plasmas, propagation through anisotropic scattering centers in atmosphere or fluids, and charge transport in semiconductors under external fields, to name only a few. Nonetheless, even for the simplest systems, many theoretical questions remain without an answer, so that the study of exponential flights has attracted renewed interest in recent years; see, e.g., . In particular, it has been emphasized that the dimension deeply affects the nature of the process, and prevents in most cases from obtaining explicit results. The aim of our work is to investigate exponential flights in a generic d-dimensional setup, under simplifying hypotheses. Here, we will mostly focus on establishing insightful relationships between space, time and the statistics of particle collisions within a given volume. A number of new results will be derived, concerning unbounded, bounded, scattering as well as absorbing domains. This paper is structured as follows.
in sec .[ setup ] we recall the mathematical formalism , introduce the physical variables and derive their inter - dependence for any . in sec .[ space_moments ] we detail the structure of the spatial moments of the particle ensemble . sec .[ coll_density_sec ] is devoted to the analysis of the collisions statistics in a given domain .then , in sec . [ dimensional_analysis ] we examine the distinct cases and .both the spatial and temporal evolution of the particle ensemble are considered , and results for bounded domains are obtained by resorting to the method of images .we provide a comparison between analytical ( or asymptotic ) findings and monte carlo simulations .conclusions will be finally drawn in sec .[ conclusions ] .within the natural framework of statistical mechanics , the evolution of the particle density for exponential flights is governed by the linear boltzmann equation .linearity stems from neglecting inter - particle collisions . in the hypothesis that an average particle energy can be defined ( the so called one - speed transport ) , and that the physical properties of the medium do not depend on position nor time , the boltzmann equation for the density reads where is total cross section of the traversed medium ( carrying the units of an inverse length ) , is the scattering cross section , is the particle speed , and is the source .the total cross section is such that represents the average flight length between consecutive collisions ( the so - called mean free path ) , and is related to the scattering cross section and to the absorption cross section by .the quantity is the scattering kernel , i.e. 
, the probability density that at each scattering collision the random direction changes from one direction to another. Denoting the solution of the Boltzmann equation for a medium without absorptions, the complete solution with absorption can be easily obtained thanks to linearity. This allows primarily addressing a purely scattering medium, without loss of generality. At long times, i.e., far from the source, the direction-averaged particle density is known to converge to a Gaussian shape, with a quantity playing the role of a diffusion coefficient. However, this limit is only approximately valid at long times, and cannot capture the particle evolution at early times, nor the finite-speed propagation effects. Indeed, diffusion implicitly assumes a non-vanishing probability of finding the particles at arbitrary distance from the source. Deviations from the limit Gaussian behavior are well known, e.g., for neutron as well as electron transport. In the following, we outline the relation between the diffusion limit and the underlying exponential flight process. Consider a d-dimensional setup. A particle, originally located at a given position in a given domain, travels along straight lines at constant speed, until it collides with the medium. The position of a particle at the n-th collision can be expressed as a random walk, i.e., a sum of random variables. The flight lengths are assumed to be identically distributed and obey an exponential probability density. The exponential law stems from assuming a uniform distribution of the scattering centers in the traversed medium. Heterogeneous materials, such as complex fluids, would generally lead to clustered scattering centers, obeying, e.g., negative binomial distributions, and in turn to non-exponential flight lengths. However, we will focus our attention on homogeneous media. At each collision, the particle randomly changes its direction according to the scattering kernel.
For the sake of simplicity, we assume here that the scattering is isotropic, so that the scattering kernel is uniform, independent of the incident direction. Once a flight length has been sampled from the exponential density, the new direction is therefore uniformly distributed on the d-sphere. Therefore, by virtue of the spherical symmetry, the transition kernel, i.e., the probability density of performing a given displacement, depends only on the displacement modulus. We initially neglect absorptions, so that the scattering and total cross sections coincide: in one-speed transport, this condition can either be seen as the particles being scattered, or equivalently being absorbed and then re-emitted (with the same speed) at each collision. This latter interpretation would correspond, e.g., to a _criticality_ condition in multiplicative systems for neutron transport. We then define the free propagator as the probability density of finding a particle at a given position at the n-th collision, for an infinite medium, i.e., in absence of boundaries. We adopt here the convention that the particle position and direction refer to the physical properties before entering the collision; for instance, the lowest index refers to uncollided particles, i.e., particles coming from the source and entering their first collision. Assuming that all the particles are isotropically emitted at the origin, the particle density must depend only on the radial distance, because of the spherical symmetry. On the basis of the properties exposed above, the particle propagation as a function of the number of collisions is a Markovian process, where at each collision the new propagator is given by a convolution integral, with the source as initial condition. In particular, by direct integration we immediately get the uncollided propagator. It is convenient to introduce the d-dimensional Fourier transform of spherically symmetrical functions, as in the subsequent analysis this will make it easier to derive the properties of the exponential flights.
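The Markov chain defining the free propagator can also be sampled directly: a uniform direction on the unit sphere in d dimensions is obtained by normalizing a vector of independent Gaussians, and positions accumulate step by step. A sketch (the values of the cross section, dimension and collision number are illustrative choices); since the directions are independent and isotropic, the mean squared distance entering the n-th collision is n times the second moment of the flight length, i.e. 2n/Σ²:

```python
import math
import random

random.seed(1)

def isotropic_direction(d):
    """Uniform direction on the unit sphere in d dimensions
    (a normalized vector of i.i.d. standard Gaussians)."""
    g = [random.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(x * x for x in g))
    return [x / norm for x in g]

def position_at_collision(n, d, sigma=1.0):
    """Position of a single exponential flight entering its n-th collision."""
    pos = [0.0] * d
    for _ in range(n):
        length = -math.log(1.0 - random.random()) / sigma
        w = isotropic_direction(d)
        pos = [x + length * wi for x, wi in zip(pos, w)]
    return pos

n, d = 10, 3
samples = [position_at_collision(n, d) for _ in range(20_000)]
mean_r2 = sum(sum(x * x for x in p) for p in samples) / len(samples)
print(mean_r2)    # close to 2 * n / sigma^2 = 20
```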
Denoting the transformed variable with respect to the radial coordinate, for any spherically symmetrical function we have a transform and anti-transform pair involving the modified Bessel function of the first kind. It is apparent that the dimension of the system plays a fundamental role, in that it affects both the transition kernel and the Fourier transform kernel itself. The convolution integral in Fourier space gives an algebraic relation, with the uncollided term as initial condition. By recursion, it follows that in the transformed space the propagator is the n-th power of the transformed kernel. It turns out that the Fourier transform of the transition kernel can be explicitly performed in arbitrary dimension, and involves the Gauss hypergeometric function. The resulting quantity is positive and properly normalized, which ensures normalization and positivity of the propagator. In fig. [fig1] we visually represent the effects of dimension and number of collisions on the shape of the propagator. Remark in particular that the spread increases with the number of collisions, for a given dimension. On the contrary, the propagator becomes more peaked around the origin with growing dimension, for a given number of collisions. Formally, performing the inverse Fourier transform gives the propagator for an arbitrary d-dimensional setup. Actually, in some cases this task turns out to be non-trivial. Nonetheless, even in absence of an explicit functional form for the propagator, information can be extracted by resorting to Tauberian theorems. In particular, the expansion of the transform at small argument gives the behavior far from the source, in the so-called _diffusion limit_; vice versa, the expansion at large argument gives the behavior of the propagator, i.e.
, close to the source. We recall that the hypergeometric function is defined through its series; at the leading order we therefore have an expression that can be viewed as the expansion of an exponential function. Then, the inverse Fourier transform gives the Gaussian shape, which is valid at long times, with a quantity playing the role of a diffusion coefficient. This stems from the exponential flights having finite-variance increments, so that their probability density falls in the basin of attraction of the central limit theorem. We mention that clustered scattering centers, with non-exponential flight lengths, may lead to non-Gaussian limiting statistics. Remark the close analogy between the two Gaussian forms: in particular, they differ by a factor which, roughly speaking, represents the average number of collisions per unit time. Moreover, at the leading order we have an expansion whose first term vanishes in some dimensions. By inverse Fourier transforming, we obtain the small-distance behavior, with constants depending on the dimension and the number of collisions. It can be shown that the divergence at the origin due to the Dirac delta source disappears after a finite number of collisions, depending on the dimension. Assume again that there are no absorptions. The free propagator gives information on the position of a transported particle at the moment of entering the n-th scattering collision. The link between the travelled distance, the flight time and the number of collisions is provided by the speed v. Indeed, once a flight of length l between any two collisions has been sampled, the flight time must satisfy l = vt. Hence, the transition kernel, i.e., the probability density of performing a given displacement in a given time, follows accordingly. It follows that inter-collision times are exponentially distributed, with average time 1/(vΣ) between collisions. We define the propagator as the probability density of finding a particle at a given position at a given time at the n-th collision.
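The crossover from ballistic to diffusive behavior can be made quantitative. For isotropic scattering the direction decorrelates completely at each (Poissonian) collision, so the velocity autocorrelation decays as v² e^(-vΣt) and the Kubo formula gives ⟨r²(t)⟩ = (2/Σ²)(vΣt − 1 + e^(−vΣt)), which approaches the diffusive value 2vt/Σ only for vΣt much larger than one. This closed form is our own consistency check, derived under the stated assumptions rather than taken from the text; a numerical verification:

```python
import math
import random

random.seed(2)

def rsq_at_time(t, d=3, v=1.0, sigma=1.0):
    """Squared distance from the source at time t for a single exponential
    flight with isotropic scattering and no absorption."""
    pos = [0.0] * d
    elapsed = 0.0
    while True:
        length = -math.log(1.0 - random.random()) / sigma
        dt = length / v
        if elapsed + dt > t:
            length = v * (t - elapsed)   # truncate the last flight at time t
        g = [random.gauss(0.0, 1.0) for _ in range(d)]
        norm = math.sqrt(sum(x * x for x in g))
        pos = [x + length * gi / norm for x, gi in zip(pos, g)]
        elapsed += dt                    # elapsed >= t forces exit after a truncated flight
        if elapsed >= t:
            return sum(x * x for x in pos)

t, v, sigma = 5.0, 1.0, 1.0
msd = sum(rsq_at_time(t, v=v, sigma=sigma) for _ in range(20_000)) / 20_000
tau = v * sigma * t
exact = (2.0 / sigma**2) * (tau - 1.0 + math.exp(-tau))  # Kubo-formula result
diffusive = 2.0 * v * t / sigma                          # long-time diffusive limit
print(msd, exact, diffusive)   # msd tracks the exact value; the diffusive limit overshoots at early times
```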
From the Markov property of the process, at each collision we have a double convolution in space and time, with the source as initial condition; in particular, by direct integration we immediately get the uncollided propagator. We denote the Laplace transform of a function by its argument; then, the double convolution integral becomes an algebraic product in the Fourier and Laplace space. By recursion, it follows that in the transformed space the propagator at the n-th collision is the n-th power of the transformed transition kernel. It turns out that the Fourier and Laplace transform of the transition kernel can be explicitly performed in arbitrary dimension. Moreover, a simple relation follows, so that the collision propagator can be interpreted as the time average of the time-resolved propagator. Finally, the propagator will be given by the sum of the contributions at each collision; taking the Fourier and Laplace transform of this sum gives the transformed propagator. In presence of absorptions, the propagator can be obtained by integrating over directions; this relation holds true at each collision. Hence, by replacing the scattering cross section with the total cross section we get the propagator, where 1/(vΣ) represents the average flight time between any two collisions. Now, observe that the quantity Σs/Σ can be interpreted as the probability of being scattered, i.e., _not_ being absorbed, at any given collision. It then follows that the propagator with absorption is given by the product of the free propagator, with the total cross section replacing the scattering cross section, times the probability of having survived up to entering the n-th collision. Remark that the non-absorption probability is fixed by the ratio of the scattering to the total cross section.
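The survival factor that multiplies the free propagator has a direct "analog" Monte Carlo interpretation: at each collision the particle goes on (is scattered) with probability Σs/Σ and is absorbed otherwise, so the number of scattering events before absorption is geometric. A quick check (the ratio Σs/Σ = 0.8 is an illustrative choice):

```python
import random

random.seed(3)

sigma_s, sigma = 0.8, 1.0      # scattering and total cross sections (illustrative)
p_scatter = sigma_s / sigma    # probability of NOT being absorbed at a collision

def scatterings_before_absorption():
    """Analog simulation: survive each collision with probability sigma_s/sigma;
    return the number of scattering events before absorption."""
    k = 0
    while random.random() < p_scatter:
        k += 1
    return k

history = [scatterings_before_absorption() for _ in range(100_000)]

n = 5
alive_frac = sum(1 for k in history if k >= n) / len(history)
print(alive_frac, p_scatter ** n)   # both close to 0.8**5 ~ 0.328
```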
when the absorption length is infinite , , and we recover the free propagator for pure scattering , with .so far , we have assumed that the medium where particles are transported has an infinite extension , hence the name free propagator .more realistically , we might consider finite - extension media with volume enclosing the source , so that boundary conditions come into play and affect the nature of the propagator .several boundary conditions can be conceived according to the specific physical system , among which the most common are reflecting and leakage .this issue has been extensively examined , e.g. , for radiation shielding in reactor physics and for electron motion in semiconductors . herewe will focus on leakage boundary conditions , where particles are considered lost as soon as their trajectory has traversed the outer boundary of the domain .while the volume is in principle totally arbitrary , in the subsequent calculations for the sake of convenience we will assume that is a sphere of radius centered in .from the point of view of the propagator , leakages can be taken into account by assuming that the population density at any vanishes at the so - called extrapolation length , i.e. , .because trajectory do not terminate at the boundary , but rather at the first collision occurred outside the volume , the extrapolation length is expected to be larger than the physical boundary of and can be determined from solving the so - called milne s problem associated to the volume . 
In general, the extrapolation length is of the order of the mean free path. For a free propagator in absence of absorption, the collision density is defined in terms of a Fourier integral whose convergence depends on the dimension of the system. It turns out that convergence is ensured for d ≥ 3, which means that for d = 1 and d = 2, in infinite or finite-size domains with transparent boundaries, the collision density diverges. This result is in analogy with Pólya's theorem, which states that random walks on Euclidean lattices are recurrent for d ≤ 2. As shown in the following, we can nonetheless provide an estimate of such divergence, i.e., single out a singular term from a functional form. For finite domains with leakage boundary conditions and/or absorptions, the collision density is defined also for d = 1 and d = 2 systems. Tauberian theorems then show the asymptotic behavior of the collision density both far from and close to the source. In the following, we detail the results pertaining to specific values of d. We choose the mean free path as length scale and work with dimensionless spatial variables; remark that in absence of absorption the length scale coincides with the scattering mean free path, since the scattering and total cross sections coincide. The d = 1 case allows illustrating the general structure of the calculations. One potential application of this framework could be provided by nanowires or carbon nanotubes (almost 1d systems) in electron transport. The transition kernel and its Fourier transform take simple forms in d = 1; the free propagator can be explicitly obtained by performing the inverse Fourier transform, and reads in terms of the modified Bessel function of the second kind. The same formula has been recently derived, e.g., as a particular case of a broader class of random flights.
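In d = 1 each displacement is a Laplace variable (an exponential length with a random sign), so the position entering the n-th collision can equivalently be sampled as the difference of two independent Gamma(n) variables; this representation is consistent with the Bessel-function form of the propagator mentioned above. A quick numerical cross-check (n = 4 and a unit cross section are illustrative choices):

```python
import math
import random

random.seed(5)

n, sigma = 4, 1.0
N = 100_000

def walk():
    """Direct d = 1 exponential flight: each step is an exponential length
    with a random sign, i.e. a Laplace variable."""
    x = 0.0
    for _ in range(n):
        step = -math.log(1.0 - random.random()) / sigma
        x += step if random.random() < 0.5 else -step
    return x

def gamma_diff():
    """Equivalent representation: difference of two Gamma(n, 1/sigma) variables."""
    return random.gammavariate(n, 1.0 / sigma) - random.gammavariate(n, 1.0 / sigma)

var_a = sum(walk() ** 2 for _ in range(N)) / N
var_b = sum(gamma_diff() ** 2 for _ in range(N)) / N
print(var_a, var_b)   # both close to n * <l^2> = 2 * n / sigma^2 = 8
```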
in fig .[ figpn_d1 ] we provide a comparison between monte carlo simulation results ( symbols ) , the analytical formula eq .( solid lines ) , and the diffusion limit , eq .( dashed lines ) , for different values of .remark in particular that the diffusion limit is not accurate for small , and becomes progressively closer to the exact result for increasing , as expected . at intermediate ,the tails of the propagator are always fatter than those predicted by the diffusion approximation . in a setup ,the collision density for the free propagator diverges .nonetheless , it is possible to single out the divergence as follows for fixed , the inverse transform can be explicitly performed in terms of hypergeometric functions . retaining the non - vanishing terms for large , we have which is composed of a term diverging with ( not depending on ) , and a functional part which is linear in ( not depending on ) . for the propagator with absorptions , from eq .we have then , performing the inverse fourier transform , we get remark that eq. has been derived , e.g. , in by solving the stationary boltzmann equation in .when , the particles are almost surely absorbed at the first collision , and we have the expansion so that at the first order the collision density has the same functional form as the uncollided propagator . at the opposite , when the particles are almost surely always scattered ( ) , and we have the expansion and diverges as the collision density associated to the free propagator , as expected . the case of leakage boundary conditions can be dealt with by imposing that the propagator must vanish for any at the extrapolated boundary . for ,the extrapolated length is given by ] , where is the euler s gamma constant .the collision density with leakages at can be obtained again by the method of images , whence where ] , so that as expected from eq . 
.[ figpn_d3 ] we compare the monte carlo simulation results ( symbols ) with the diffusion limit , eq .( dashed lines ) , and with the approximate propagator , eq .( solid lines ) .remark that eq .provides a fairly accurate approximation of the simulation results , except close to the source . after carrying out the sum over , the collision density given by the following integral which again can not be performed explicitly .as before , we consider then the asymptotic behavior . denoting ] and .the moments of the residence time within a sphere of radius can be explicitly computed based on eq .for the free propagator , i.e. , when particles can freely cross the surface of the sphere .we have when , and for .moreover , for leakage boundary conditions at the surface , from we have when , and for . the case is briefly presented here for the sake of completeness .the transition kernel reads whose fourier transform is we could not find an explicit representation for the inverse fourier transform of . nonetheless , the propagator is known and reads ^{n-2}\ ] ] for .hence , it follows that the propagator can be obtained from solving the integral this integral can be performed , and gives , \label{propagator_4d}\ ] ] where being an hypergeometric function . as for the collision density , we have which can be computed explicitly and gives finally , the collision density in presence of leakages at the boundary can be obtained by resorting to the method of images , whence , with ] , which is small compared to . for odd , we can use the series representation now , carrying out the double sum over odd and over , from eq .we get for large . again , to obtain this result we have neglected a constant term of the kind /(2\pi)$ ] , which is small compared to . 10 b. d. hughes , _ random walks and random environments _ vol .i ( clarendon press , oxford , 1995 ) .g. h. weiss , _ aspects and applications of the random walk _ ( north holland press , amsterdam , 1994 ) .w. 
feller , _ an introduction to probability theory and its applications _, 3rd edition ( wiley , new york , 1970 ) .r. metzler and j. klafter , phys . rep . *339 * , 1 ( 2000 ) .j. dutka , arch .exact sci . * 32 * , 351 ( 1985 ) .g. i. bell and s. glasstone , _ nuclear reactor theory _( van nostrand reinhold , 1970 ) . m. weinberg and e. p. wigner , _ the physical theory of neutron chain reactors _ ( university of chicago press , 1958 ) . c. cercignani , _ the boltzmann equation and its applications _( springer , 1988 ) .h. t. hillen and g. othmer , siam j. appl .math * 61 * , 751 ( 2000 ) . c. jacoboni and p. lugli , _ the monte carlo method for semiconductor device simulation _( springer , 1989 ) .k. pearson , nature * 27 * , 294 ( 1905 ) .j. c. kluyver , proc .koninklijke akademie van wetenschappen te amsterdam * 8 * , 341 ( 1906 ) .j. w. strutt ( lord rayleigh ) , philos . magazine* 6 * ( 37 ) , 321 ( 1919 ) .m. abramowitz and i. a. stegun ( eds . ) , _ handbook of mathematical functions with formulas , graphs , and mathematical tables _ , 9th edition ( dover , ny , 1972 ) . i. lux and l. koblinger , _ monte carlo particle transport methods : neutron and photon calculations _ ( crc press , boca raton , 1991 ) .j. a. wesson , _ tokamaks _ , 3 ed .( clarendon press , 2003 ) .e. jakeman and r. j. a. tough , j. opt .am . a * 4 * , 1764 ( 1987 ) .t. kurosawa , proceedings of the 3 international conference on hot carriers in semiconductors ( 1964 ) . c. jacoboni and l. reggiani , rev .* 55 * , 645 ( 1983 ) .j. price , semicond .* 14 * , 249 ( 1979 ) .j. c. j. paasschens , phys .e * 56 * , 1135 ( 1997 ) .e. orsingher and a. de gregorio , j. theor . probab . *20 * , 769 ( 2007 ) .a. d. kolesnik , j. stat . phys . * 131 * , 1039 ( 2008 ) .g. le car , j. stat . phys .* 140 * , 728 ( 2010 ) .m. v. fischetti and s. e. laux , phys .b * 38 * , 9721 ( 1988 ) . m. v. fischetti and s. e. laux , phys .b * 48 * , 2244 ( 1993 ) .g. placzek and w. seidel , phys . rev . 
*72 * , 550 ( 1947 ) .i. freund , phys .a * 45 * , 8854 ( 1992 ) .s. redner , _ a guide to first - passage processes _ ( cambridge university press , cambridge , 2001 ). k. m. case and p. f. zweifel , _ linear transport theory _( addison - wesley , reading , 1967 ) .m. kac , _ probability and related topics in physical sciences _ ( lectures in applied mathematics , wiley , 1957 ) .a. m. berezhkovskii , v. zaloj , and n. agmon , phys . rev .e * 57 * , 3937 ( 1998 ) .a. zoia , e. dumonteil , and a. mazzolo , arxiv:1102.5291v1 .g. milton wing , _ an introduction to transport theory _( wiley , ny , 1962 ) .w. stadje , j. stat . phys . * 46 * , 207 ( 1987 ) .b. conolly and d. roberts , eurres . * 28 * , 308 ( 1987 ) . | in this paper we analyze some aspects of _ exponential flights _ , a stochastic process that governs the evolution of many random transport phenomena , such as neutron propagation , chemical / biological species migration , or electron motion . we introduce a general framework for -dimensional setups , and emphasize that exponential flights represent a deceivingly simple system , where in most cases closed - form formulas can hardly be obtained . we derive a number of novel exact ( where possible ) or asymptotic results , among which the stationary probability density for systems , a long - standing issue in physics , and the mean residence time in a given volume . bounded or unbounded , as well as scattering or absorbing domains are examined , and monte carlo simulations are performed so as to support our findings . |
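The exponential-flight dynamics studied above can be illustrated with a minimal Monte Carlo sketch. The setup below is an illustrative simplification, not the article's: a 1D walk with unit scattering rate, where each flight has an exponentially distributed length and an isotropic (±1) direction. Since a step displacement d = μs has E[d] = 0 and E[d²] = E[s²] = 2/Σ², the variance after n collisions is 2n/Σ², which is the diffusive scaling the article compares against. All function names are invented for this sketch.

```python
import random

def exponential_flight_1d(n_collisions, sigma=1.0, rng=None):
    """Walk a 1D exponential flight: each step has an Exp(sigma)-distributed
    length and an isotropic (+1/-1) direction; return the final position."""
    rng = rng if rng is not None else random
    x = 0.0
    for _ in range(n_collisions):
        step = rng.expovariate(sigma)        # path length between collisions
        direction = rng.choice((-1.0, 1.0))  # isotropic scattering in 1D
        x += direction * step
    return x

def displacement_variance(n_collisions, n_walkers, seed=0):
    """Sample variance of the walker position after n_collisions steps."""
    rng = random.Random(seed)
    xs = [exponential_flight_1d(n_collisions, rng=rng) for _ in range(n_walkers)]
    mean = sum(xs) / n_walkers
    return sum((v - mean) ** 2 for v in xs) / n_walkers
```

With Σ = 1 the per-step displacement variance is exactly 2, so the empirical variance after 20 collisions should be close to 40, consistent with the diffusion limit at large n.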
The recent trend towards parallel computing in the financial industry is not surprising. As the complexity of models used in the industry grows, while the demand for fast, sometimes real-time, solutions persists, parallel computing is a resource that is hard to ignore. In 2009, Bloomberg and NVIDIA worked together to run a two-factor model for calculating hard-to-price asset-backed securities on 48 Linux servers paired with graphics processing units (GPUs); the same workload traditionally required about 1000 servers to accommodate customer demand. GPU computing offers several advantages over traditional parallel computing on clusters of CPUs. Clusters consume non-negligible energy and space, and computations over clusters are not always easy to scale. In contrast, a GPU is small, fast, and consumes only a tiny fraction of the energy consumed by clusters. Consequently, there has been a recent surge in academic papers and industry reports that document the benefits of GPU computing in financial problems. Arguably, the numerical method that benefits most from GPUs is Monte Carlo simulation. Monte Carlo methods are inherently parallel, and thus more suitable for implementation on GPUs than most alternative methods. In this paper we concentrate on Monte Carlo methods and financial simulation, and discuss computational and algorithmic issues that arise when financial simulation algorithms are developed for GPUs and for traditional clusters. The computational framework we use is the estimation of an integral over the -dimensional unit cube, using sums of the form . In Monte Carlo and quasi-Monte Carlo, converges to as ; in the former the convergence is probabilistic and the come from a pseudorandom sequence, while in the latter the convergence is deterministic and the come from a low-discrepancy sequence.
for a comprehensive survey of monte carlo and quasi - monte carlo methods ,see .often it is desirable to obtain multiple independent estimates for say so that one could use statistics to measure the accuracy of the estimation by the use of sample standard deviation , or confidence intervals .let us assume that an allocation of computing resources is done and we choose parameters : the first parameter , is the sample size , and gives the number of vectors from the sequence ( pseudorandom or low discrepancy ) to use in estimating and the parameter gives the number of independent replications we obtain for , i.e. , the grand average gives the overall point estimate for in monte carlo , to obtain the independent estimates one simply uses blocks of pseudorandom numbers .in quasi - monte carlo , one has to use methods that enable independent randomizations of the underlying low - discrepancy sequence .these methods are called randomized quasi - monte carlo ( rqmc ) methods ( see ) .traditionally , in parallel implementations of monte carlo algorithms , one often assigns the processor ( of the allocated processors ) the evaluation of the estimate to do this computation , each processor needs to have an assigned number sequence ( pseudorandom or low - discrepancy ) and methods like blocking , leap - frogging , and parameterization are used to make this assignment .parameterization is particularly useful when independent replications are needed to compute ( see , and also , , ) .if only a single estimate is needed , then blocking or leap - frogging can be used ( , , , , ) .figure [ paral](a ) describes this traditional monte carlo implementation where the processor generates its assigned sequence to compute as . in many applications typically in millions , and is large enough for statistical accuracy , in the range 50 to 100 . 
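The replication scheme described above — R independent estimates, each from a block of N points, combined into a grand average with a sample standard deviation — can be sketched as follows. The integrand f(x) = x₁x₂ (true value 1/4) is a stand-in chosen for this illustration; blocking of a single pseudorandom stream provides the independent replications.

```python
import random

def mc_replications(f, dim, n, r, seed=0):
    """Return R independent Monte Carlo estimates Y_1..Y_R; each uses a
    block of N pseudorandom points in the dim-dimensional unit cube."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(r):
        acc = 0.0
        for _ in range(n):
            acc += f([rng.random() for _ in range(dim)])
        estimates.append(acc / n)
    return estimates

def grand_average(estimates):
    """Grand average of the replications and the sample standard deviation."""
    r = len(estimates)
    mean = sum(estimates) / r
    s = (sum((y - mean) ** 2 for y in estimates) / (r - 1)) ** 0.5
    return mean, s
```

The sample standard deviation s is the accuracy measure used in the numerical comparisons later in the paper.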
in a massively parallel environment , depicted by the second diagram , where the number of processors much larger than , it can be a lot more efficient to completely `` transpose '' our computing strategy .now the processors run simultaneously ( for a total of times ) to generate the sequence to compute as , where is part of the sequence which is assigned to the processor .the choice of the two computing paradigms , which we vaguely name as `` parallel '' and `` massively parallel '' , determines how the underlying sequence ( pseudorandom or low - discrepancy ) should be generated . in the parallel paradigm ,a recursive algorithm for generating the underlying sequence works best since each processor generates the `` entire '' sequence .this paradigm is appropriate for a computing system with distributed memory , such as a cluster . for the massively parallel paradigm , a direct algorithm that generatesthe term of the sequence from is more appropriate . use the term `` counter - based '' to describe such direct algorithms .the massively parallel paradigm is an appropriate model for gpu computing where prohibitive cost of memory access makes recursive computing inefficient . in section [ section pseudorandom ]we briefly discuss a counter - based pseudorandom number generator , called * philox * , introduced by , and the pseudorandom number generators , * mersenne twister * , and * xorwow*. in section [ section rstart ] we introduce a randomized quasi - monte carlo sequence , which we name * rasrap * , and give algorithms for recursive and counter - based implementations of this sequence . 
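Before the specific generators are introduced, the counter-based idea above — the i-th output computed directly as a function of i, with no recursion or shared state — can be made concrete with a toy sketch. This uses a splitmix64-style integer finalizer purely to illustrate the paradigm; it is not philox, and the constants are the well-known splitmix64 mixing constants.

```python
MASK64 = (1 << 64) - 1

def mix64(counter):
    """splitmix64-style finalizer: an invertible direct map counter -> 64-bit
    word (illustrative only; NOT the philox generator)."""
    z = (counter + 0x9E3779B97F4A7C15) & MASK64
    z = ((z ^ (z >> 30)) * 0xBF58476D1CE4E5B9) & MASK64
    z = ((z ^ (z >> 27)) * 0x94D049BB133111EB) & MASK64
    return (z ^ (z >> 31)) & MASK64

def counter_uniform(i):
    """The i-th uniform in [0,1): computed directly from i, no state.
    Top 53 bits are used so the result is always strictly below 1."""
    return (mix64(i) >> 11) / float(1 << 53)
```

In the massively parallel paradigm, processor p simply evaluates `counter_uniform(p)` (or a disjoint counter range) with no coordination between processors.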
in this section , we also give a brief description of a well - known quasi - monte carlo sequence , the sobol sequence .we will compare the computational time for generating these sequences on cpu and gpu , in section [ section timing comp ] .most pseudorandom number generators are inherently iterative : they are generated by successive application of a transformation to an element of the state space to obtain the next element of the state space , i.e. , . herewe discuss some of the pseudorandom number generators considered in this paper .one of the most popular and high quality pseudorandom number generators is the mersenne twister introduced by .it has a very large period and excellent uniformity properties .it is available in many platforms , and recently matlab adopted it as its default random number generator .a parallel implementation of the mersenne twister was also given by .their approach uses parameterization , and it falls under our parallel computing paradigm : each processor in the parallel environment generates a mersenne twister , and different mersenne twisters generated across different processors are assumed to be statistically independent .there are several parameters that need to be precomputed and stored to run the parallel implementation of mersenne twister .xorwow is a fast pseudorandom number generator introduced by .this generator is available in curand : a library for pseudorandom and quasi - random number generators for gpu provided by nvidia .however , the generator fails certain statistical tests ; see for a discussion .the reason we consider this generator is because of its availability in curand , and that its computational speed can be used as a benchmark against which other generators can be compared .philox is a counter - based pseudorandom number generator introduced by .its generation is in the form , and thus falls under our massively parallel computing paradigm .a comparison of some counter - based and conventional pseudorandom 
number generators ( including philox and mersenne twister ) is given in . in section[ section timing comp ] , we will present timing results comparing the pseudorandom number generators , and in section [ libor ] and [ mbs ] , we will compare these sequences when they are used in some financial problems .these numerical results will also include rasrap and sobol , two randomized - quasi monte carlo sequences that we discuss next .the van der corput sequence , and its generalization to higher dimensions , the halton sequence , are among the best well - known low - discrepancy sequences .the term of the van der corput sequence in base , , is defined as where the halton sequence in the bases is .this is a low - discrepancy sequence if the bases are relatively prime . in practice , is usually chosen as the prime number .there is a well - known defect of the halton sequence : in higher dimensions , when the base is larger , certain components of the sequence exhibit very poor uniformity .this is often referred to as _ high correlation between large bases ._ as a remedy , permuted ( or , scrambled ) halton sequences were introduced .the _ permuted van der corput sequence _ generalizes ( [ vdcp ] ) as where is a permutation on the digit set . by using different permutations for each base, one can define the permuted halton sequences in the usual way .there are many choices for permutations published in the literature ; a recent survey is given by . 
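The radical-inverse map underlying the van der Corput and Halton sequences — reflecting the base-b digits of the index about the radix point — can be sketched directly. Function names are illustrative.

```python
def radical_inverse(i, base):
    """phi_b(i): reflect the base-b digit expansion of i about the radix point."""
    x, denom = 0.0, 1.0
    while i > 0:
        denom *= base
        i, digit = divmod(i, base)
        x += digit / denom
    return x

def halton_point(i, bases=(2, 3, 5)):
    """The i-th point of the Halton sequence in the given (pairwise coprime)
    bases; in practice the d-th base is the d-th prime."""
    return tuple(radical_inverse(i, b) for b in bases)
```

For example, i = 5 is 101 in base 2, so its base-2 radical inverse is .101 = 0.625.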
in this paper , we will follow the approach used in and pick these permutations at random .the halton sequence can be generated recursively , which would be appropriate for an implementation on cpu , or directly ( counter - based ) , which would be appropriate for gpu .next we discuss some recursive and counter - based algorithms for the halton sequence .a fast recursive method for generating the van der corput sequence was given by .we now explain his algorithm .let be a positive integer and arbitrary .define the sequence by and the transformation by where the transformation is called the von neumann - kakutani transformation in base the orbit of zero under i.e. , is the van der corput sequence in base . in fact, the orbit of any point under is a low - discrepancy sequence .if is chosen at random from the uniform distribution on then the orbit of under is called a _ random - start van der corput sequence _ in base the following algorithm summarizes the construction by of the ( random - start ) van der corput sequence in base it can be generalized to halton sequences in the obvious way .[struck alg ] . generates a random - start van der corput sequence with starting point and base 1 .generate the sequence according to ( [ bk ] ) ; 2 . choose an arbitrary starting point ; 3 .calculate according to ( [ k ] ) ; 4 . 
step 3 - 4 .algorithm [ struck alg ] is prone to rounding error in floating number operations due to the floor operation in ( [ k ] ) .for example , a c++ compiler gives a wrong index after 3 steps of iteration when the starting point is if the rounding error introduced in ( [ bk ] ) is not carefully handled .we now suggest an alternative algorithm that computes a random - start permuted halton sequence .the advantages of this algorithm over algorithm [ struck alg ] are : ( i ) it avoids rounding errors , ( ii ) it is faster , and ( iii ) it can be used to generate permuted halton sequences .[ recursive perm vdc ] ( recursive ) generates a random - start permuted van der corput sequence in base . 1 .initialization step .generate a random number and find some integer so that is the term in the van corput sequence in base .initialize and store a random digit permutation .expand in base as ( depends on ) .set for .store .calculate and store for .set for .set the quasi - random number ; 2 .let .find ; 3 .. set . set for .the quasi - random number corresponding to is ; 4 .repeat step 2 - 3 .algorithm 2 is an efficient iterative algorithm appropriate for the parallel computing paradigm .however , for the massively parallel computing paradigm , such as gpu computing , we need a counter - based algorithm . for the halton sequence, this would be simply its definition : ( counter - based ) [ definition vdc ] generates a random - start permuted van der corput sequence in base .1 . initialization step : choose a small positive real number , .generate a random number from the uniform distribution on , and find such that ; 2 .the quasi - random number corresponding to is ; 3 .let and repeat step 2 - 3 .the name rasrap is an abbreviation for random - start randomly permuted halton sequence : if in algorithms [ recursive perm vdc ] and [ definition vdc ] , the permutations for each base are generated at random , then we obtain rasrap . 
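A rasrap-style coordinate in the counter-based form of Algorithm 3 can be sketched as follows. Two simplifications are made and should be read as assumptions: the "find n₀ such that the radical inverse of n₀ approximates a random uniform" step is replaced by drawing a random integer starting index directly, and the digit permutation fixes 0 so that finite digit expansions remain valid.

```python
import random

def permuted_radical_inverse(i, base, perm):
    """phi_{sigma,b}(i): radical inverse with digits scrambled by perm, a
    permutation of {0,...,base-1} with perm[0] = 0 (trailing zeros stay zero)."""
    x, denom = 0.0, 1.0
    while i > 0:
        denom *= base
        i, digit = divmod(i, base)
        x += perm[digit] / denom
    return x

def rasrap_dimension(base, seed):
    """Counter-based access to one coordinate of a random-start, randomly
    permuted van der Corput sequence (a simplified rasrap-style sketch)."""
    rng = random.Random(seed)
    perm = [0] + rng.sample(range(1, base), base - 1)  # random sigma, sigma(0)=0
    start = rng.randrange(1, base ** 8)                # simplified random start
    return lambda i: permuted_radical_inverse(start + i, base, perm)
```

Because the map from index to output is direct, the i-th point can be computed on any processor without generating the preceding points, which is the property the massively parallel paradigm requires.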
the sobol sequence is a well - known fast low - discrepancy sequence popular among financial engineers .the component of the vector in a sobol sequence is calculated by , where is the digit from the right when integer is represented in base and is the bitwise exclusive - or operator .the so - called direction numbers , , are defined as to generate the sobol sequence , we need to generate a sequence of positive integers . the sequence is defined recursively as follows : , where are coefficients of a primitive polynomial of degree in the field , .the initial values can be chosen freely given that each , is odd and less than . because of this freedom , different choices for direction numbers can be made based on different search criteria minimizing the discrepancy of the sequence .we use the primitive polynomials and direction numbers provided by .the counter - based implementation of the sobol sequence introduced here is convenient on gpus , but a more efficient implementation proposed by antonov and saleev based on gray code is used in practice on cpus . for details about this approach ,see .the sobol sequence can be randomized using various randomized quasi - monte carlo methods . herewe will use the random digit scrambling method of .more on randomized quasi - monte carlo and some parallel implementations can be found in , and , .mersenne twister , philox , xorwow , rasrap , and sobol sequences are run on intel i7 3770k and nvidia geforce gtx 670 .we compare the throughput of different algorithms on cpu ( table 1 ) and gpu ( table 2 ) .table 1 shows that the fastest algorithm for the halton sequence on cpu is algorithm 2 .it is about 3.8 times as fast as the algorithm by struckmeier ( algorithm 1 ) .not surprisingly algorithm 3 , the counter - based implementation , is considerably slower on cpu .mersenne twister uses its serial cpu implementation and it is about 3.4 times faster than algorithm 2 for the halton sequence . 
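The XOR construction above has a convenient built-in correctness check: in the first dimension of the Sobol sequence all m_k equal 1, so the direction numbers are v_k = 2^{-k} and the construction reduces exactly to the base-2 radical inverse (the van der Corput sequence in base 2). The sketch below verifies a counter-based first-dimension implementation against that identity; the XOR is performed on scaled integer representations of the direction numbers.

```python
def sobol_dim1(i, bits=32):
    """i-th point of the first Sobol dimension: direction numbers v_k = 2^-k
    (all m_k = 1), XOR-accumulated over the binary digits b_k of i."""
    v = [1 << (bits - 1 - k) for k in range(bits)]  # v_k scaled by 2^bits
    acc, k = 0, 0
    while i > 0:
        if i & 1:          # b_k, the k-th binary digit of i
            acc ^= v[k]
        i >>= 1
        k += 1
    return acc / 2.0 ** bits

def vdc_base2(i):
    """Radical inverse in base 2, for comparison."""
    x, denom = 0.0, 1.0
    while i > 0:
        denom *= 2
        i, d = divmod(i, 2)
        x += d / denom
    return x
```

Higher dimensions replace the single-bit v_k with direction numbers built from a primitive polynomial recursion, but the XOR accumulation loop is unchanged.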
The Sobol sequence based on Gray code is faster than the Mersenne twister. Table 2 shows that the throughput of Algorithm 3 on the GPU improves significantly compared to the CPU value. The counter-based Sobol sequence is twice as fast as rasrap, and the pseudorandom number generator philox is almost 200 times faster than rasrap. [gencpu] The computational speed at which various sequences are generated is only one part of the story. We next examine the accuracy of the estimates obtained when these sequences are used in simulation. In the next section, we use these sequences in two problems from computational finance, and compare them with respect to the standard deviation of their estimates and computational speed. An interest rate derivative is a derivative where the underlying asset is the right to pay or receive a notional amount of money at a given interest rate. The interest rate derivatives market is the largest derivatives market in the world. To price interest rate derivatives, forward interest rate models are widely used in the industry. There are two kinds of forward rate models: the continuous rate model and the simple rate model. The framework developed by (HJM) explicitly describes the dynamics of the term structure of interest rates through the dynamics of the forward rate curve. The HJM model has two major drawbacks: (1) the instantaneous forward rates are not directly observable in the market; (2) some simple choices of the form of the volatility are not admissible.
in practice , many fixed income securities quote the interest rate on an annual basis with semi - annual or quarterly compounding , instead of a continuously compounded rate .the simple forward rate models describe the dynamics of the term structure of interest rates through simple forward rates , which are observable in the market .this approach is developed by , , and .the london inter - bank offered rates ( libor ) is one of the most important benchmark simple interest rates .let denote the time- value of a zero coupon bond paying 1 at the maturity time .a forward rate ( ) is an interest rate fixed at time for borrowing or lending at time over the period ] . equation ( [ fwdlibor ] ) can be then rewritten as the subscript emphasizes we are looking at a finite set of bonds .the dynamics of the forward libor rates can be described as a system of sdes as follows . for a brief informal derivation ,see . where is a -dimensional standard brownian motion and the volatility may depend on the current vector of rates as well as the current time . is the unique integer such that .pricing interest rate derivative securities with libor market models normally requires simulations . since the libor market model deals with a finite number of maturities , only the time variable needs to be discretized .we fix a time grid to simulate the libor market model . in practice, one would often take so the simulation goes directly from one maturity date to the next . for simplicity, we use a constant volatility in the simulation .we apply an euler scheme to ( [ dl_n ] ) to discretize the system of sdes of the libor market model , producing + \hat{l}_n(t_i)\sqrt{t_{i+1 } - t_i } \sigma_n(t_i)^{\top}z_{i+1},\ ] ] where and are independent random vectors in . 
herehats are used to identify discretized variables .we assume an initial set of bond prices is given and initialize the simulation by setting in accordance with ( [ fwdliborn ] ) .next we use the simulated evolution of libor market rates to price a caplet .an interest rate cap is a portfolio of options that serve to limit the interest paid on a floating rate liability over a set of consecutive periods .each individual option in the cap applies to a single period and is called a caplet .it is sufficient to price caplets since the value of a cap is simply the sum of the values of its component caplets .we follow the derivation in .consider a caplet for the time period ] .then the payoff function at time is equal to at time .this payoff typically requires the simulation of the dynamics of the term structure . derived a formula for the time- price of the caplet under the assumption of following a lognormal distribution , which does not necessarily correspond to a price in the sense of the theory of derivatives valuation . in practice , this formula is used to calculate the `` implied volatility '' from the market price of caps . to test the correctness of the libor market model simulation , we use the daily treasury yield curve rates on 02/24/2012 as shown in table 3 to initialize the libor market rates simulation .we first apply a cubic spline interpolation to the rates in table 3 to get estimated yield curve rates for every 6 months .then the estimated yield curve rates are used to calculate the bond prices for every 6 months in order to initialize the libor rates in ( [ bl ] ) .we assume the following parameters in libor rates simulation the simulations are run on intel i7 3770k and nvidia geforce gtx 670 respectively . 
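The Euler stepping from one tenor date to the next described above can be sketched in a minimal one-factor form. Several choices here are assumptions for illustration only: a flat initial curve, a constant scalar volatility, and the standard spot-measure LIBOR drift (the paper's own drift equation is elided, so this is the textbook form, which may differ in detail). The caplet payoff δ·max(L_n(T_n) − K, 0) paid at T_{n+1} is discounted by the realized spot-measure numeraire.

```python
import math
import random

def lmm_caplet_price(l0=0.05, sigma=0.15, delta=0.5, n=4, strike=0.05,
                     n_paths=5000, seed=7):
    """Monte Carlo caplet price in a one-factor LIBOR market model, Euler
    stepped tenor date to tenor date under the spot measure (a sketch)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        rates = [l0] * (n + 1)          # L_0(0), ..., L_n(0): flat curve
        for i in range(n):              # Euler step from T_i to T_{i+1}
            dt = delta
            z = rng.gauss(0.0, 1.0)
            old = rates[:]
            for j in range(i + 1, n + 1):   # only rates not yet fixed evolve
                drift = sigma * sum(delta * old[k] * sigma /
                                    (1.0 + delta * old[k])
                                    for k in range(i + 1, j + 1))
                rates[j] = old[j] * (1.0 + drift * dt
                                     + sigma * math.sqrt(dt) * z)
                rates[j] = max(rates[j], 1e-12)   # crude positivity floor
        numeraire = 1.0                 # spot numeraire: realized accruals
        for j in range(n + 1):
            numeraire *= 1.0 + delta * rates[j]   # rates[j] = L_j(T_j)
        payoff = delta * max(rates[n] - strike, 0.0)
        total += payoff / numeraire
    return total / n_paths
```

With zero volatility the scheme is deterministic and the price collapses to the discounted intrinsic value, which gives an exact sanity check on the discounting logic.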
for a fixed sample size , we repeat the simulation 100 times using independent realizations of the underlying sequence .we investigate the sample standard deviation of the 100 estimates and computing time as a function of the sample size .we also compare the efficiency of different sequences , where efficiency is defined as the product of sample standard deviation and execution time .the sobol sequence and a scrambled version of it are provided in the curand library from nvidia .we use both the single precision version ( sobol-lib(single ) ) and double precision version ( sobol-lib(double ) ) in our simulation .we also implement our own version of the sobol sequence ( sobol(single ) and sobol(double ) ) for comparison .figure [ sobolcomp ] plots the sample standard deviation of 100 estimates for the caplet price , computing time , and efficiency , of different implementations of the sobol sequence against the sample size .we also include the numerical results obtained using the fast pseudorandom number sequence xorwow from curand as a reference .we make the following observations : 1 .the convergence rate exhibits a strange behavior and levels off for the curand sobol sequence generators , sobol-lib(single ) and sobol-lib(double ) , as gets large .our implementation of the sobol sequence gives monotonically decreasing sample standard deviation as increases ; 2 .the execution time for curand generators sobol-lib(single ) and sobol-lib(double ) is significantly longer than our implementation , and not monotonic for a specific range of ; 3 . 
the efficiency of curand generators sobol-lib(single ) and sobol-lib(double ) is even worse than the efficiency of the pseudorandom number sequence xorwow .our sobol sequence implementations have better efficiency than xorwow .due to the poor behavior of the sobol sequence in the curand library , we will use our implementation of the sobol sequence with single precision in the rest of the paper .we will denote this sequence simply as sobol " in the numerical results . in section [ section timing comp ] , we compared the computing times of several sequences .here we compare the performance of mersenne twister , rasrap and sobol , when they are used in simulating the libor market model .the sequences are run on one cpu core .figure [ singlecpulibor ] shows that the sample standard deviation of the estimates obtained from rasrap and sobol sequences converge at a much faster rate than the mersenne twister .the convergence rate for mersenne twister is about , and the rate for rasrap and sobol is about and , respectively .the recursive implementation of rasrap does not introduce much overhead in running time and gives very close timing results to mersenne twister .the sobol sequence based on gray code is faster than mersenne twister . as a result ,the two low - discrepancy sequences enjoy better and flatter " efficiency than that of mersenne twister .we next investigate how well rasrap and sobol sequence results scale over multi - core cpu .we implement a parallel version of rasrap and sobol with openmp that can run on 8 cpu cores simultaneously .figure [ liborcpu ] plots the performance of openmp version of rasrap and sobol on cpu .it exhibits the same pattern of convergence , running time , and efficiency as in figure [ singlecpulibor ] .the convergence remains the same as in the one core case , but we gain a speedup of four with the parallelism using openmp . 
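The convergence rates quoted above (roughly N^{-1/2} for pseudorandom sequences versus faster rates for the low-discrepancy sequences) are typically read off as the slope of a least-squares fit of log(sample standard deviation) against log(N). A small helper, with hypothetical input data:

```python
import math

def convergence_rate(sample_sizes, sample_stds):
    """Least-squares slope of log(std) vs log(N), i.e. the exponent in
    std ~ C * N^rate."""
    xs = [math.log(n) for n in sample_sizes]
    ys = [math.log(s) for s in sample_stds]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

Feeding in the (N, sample std) pairs from a sequence of runs recovers the empirical exponent directly.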
In this section we compare the counter-based implementations of rasrap and Sobol with the pseudorandom sequences philox and xorwow, on the GPU. Figure [liborgpu] plots the sample standard deviation, computing time, and efficiency. We make the following observations:

1. the convergence rate for philox and xorwow is about  and , respectively;
2. the convergence rate for rasrap and Sobol is about  and , respectively;
3. xorwow is the fastest generator, followed by philox and Sobol; rasrap is slightly slower than Sobol;
4. the efficiency of Sobol is the best among all sequences.

We follow the mortgage-backed securities (MBS) model given by . Consider a security backed by mortgages of length  with fixed interest rate , which is the interest rate at the beginning of the mortgage. The present value of the security is then , where  is the expectation over the random variables involved in the interest rate fluctuations. The parameters in the model are the following:

- : discount factor for month ;
- : cash flow for month ;
- : interest rate for month ;
- : fraction of remaining mortgages prepaying in month ;
- : fraction of remaining mortgages at month  (remaining annuity at month );
- : monthly payment;
- : random variable.

The model defines several of these variables as follows: the interest rate fluctuations and the prepayment rate are given by , where  are constants of the model. The constant  is chosen to normalize the log-normal distribution so that . The initial interest rate  also needs to be specified. We choose the following parameters in our numerical results: . Figure [mbscpu] compares OpenMP implementations of the rasrap and Sobol sequences on 8 CPU cores. The sample standard deviation of estimates obtained by rasrap is smaller than that of Sobol for every sample size; however, the Sobol sequence gives a better rate of convergence. We gain a speedup of 6 with the parallelism using OpenMP compared to the single-core version. rasrap has the better efficiency for all sample sizes. Figure [mbsgpu]
compares the gpu implementations of rasrap , sobol , philox , and xorwow .we observe : 1 .the convergence rate for philox and xorwow is about ; 2 .rasrap gives lower standard deviation than sobol , however , the convergence rate for sobol ( ) is better than rasrap ( ) ; 3 .the efficiency of rasrap is the best among all sequences .in figure [ hist ] , we display the gpu speed - up over cpu for both libor and mbs examples . these results only consider the computing time , and the computing time of cpu - twister is taken as the base value in each example .the largest speed - up is a factor of 95 and it is due to gpu - xorwow for the libor market model simulation . in the mbs example , gpu - rasrap speed - up is a factor of 250 , and the other gpu sequences give a speed - up of factor 290 . finally , to demonstrate the impressive computing power of gpu , we compare gpu with the high performance computing ( hpc ) cluster at florida state university .we implement a parallel sobol sequence using mpi , and run simulations for the two examples , libor and mbs .figure [ cluster ] plots the computing time against the number of cores used by the cluster , when the sample size takes various values . the gpu computing time is plotted as a horizontal line since all the cores of gpu are used in computations .figure [ cluster ] shows that for the libor example , the gpu we used in our computations has equivalent computing power roughly as 128 nodes on the hpc cluster .this is about when the hpc computing time plot reaches the level of gpu computing time , for each . 
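Returning to the MBS model described earlier: since its defining recursions are elided in the text above, the sketch below uses a generic stand-in — the monthly rate follows a multiplicative lognormal fluctuation, a constant fraction of the pool prepays each month, and prepaying mortgages simply stop paying (prepaid principal is ignored). These are illustrative assumptions, not the paper's model. With zero volatility and zero prepayment the present value must reduce to the closed-form level-annuity value, which gives an exact check.

```python
import math
import random

def mbs_present_value(m=120, i0=0.007, vol=0.0, prepay=0.0,
                      n_paths=1, seed=3):
    """Monte Carlo PV of a pool of level-payment mortgages: d_k discounts
    the month-k cash flow, a constant fraction prepays each month, and the
    rate i_k fluctuates lognormally (a generic stand-in recursion)."""
    rng = random.Random(seed)
    c = 1.0                        # monthly payment, notional scale
    total = 0.0
    for _ in range(n_paths):
        rate, discount, remaining, pv = i0, 1.0, 1.0, 0.0
        for _ in range(m):
            shock = math.exp(vol * rng.gauss(0.0, 1.0) - 0.5 * vol * vol)
            rate *= shock                     # i_k from i_{k-1}
            discount /= (1.0 + rate)          # d_k = prod_j 1/(1 + i_j)
            pv += discount * c * remaining    # payment on the pool still alive
            remaining *= (1.0 - prepay)       # fraction still paying next month
        total += pv
    return total / n_paths
```

The lognormal shock includes the −vol²/2 correction so that E[shock] = 1, mirroring the normalization constant mentioned in the model description.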
in the mbs example , 256 nodes on the hpc cluster are equivalent to the gpu .we also point out that on a heterogeneous computing environment such as a cluster , continually increasing the number of nodes will not necessarily decrease the running time due to higher cost of communication between nodes and higher probability that slow nodes are used .but for gpus , a more powerful product with more cores would suggest gains in computing time .chen , g. , thulasiraman , p. and thulasiram , r.k ., distributed quasi - monte carlo algorithm for option pricing on hnows using mpc . in _ proceedings of the 39th annual simulation symposium _ , huntsville , usa , 26 april 2006 , pp .9097 , 2006 .dedoncker , e. , zanny , r. , ciobanu , m. and guan , y. , distributed quasi monte - carlo methods in a heterogeneous environment . in_ proceedings of the 9th heterogeneous computing workshop _ ,cancun , mexico , may 2000 , pp .200206 , 2000 .hofbauer , h. , uhl , a. and zinterhof , p. , parameterization of zinterhof sequences for grid - based qmc integration . in j. volkert , t. fahringer , d. kranzlmller , and w. schreiner , editors , _ proceedings of the 2nd austrian grid symposium _ ,volume 221 of books.at , innsbruck , austria , 2007 , pp . 91105 , 2007 .austrian computer society .matsumoto , m. and nishimura , t. , mersenne twister : a 623-dimensionally equidistributed uniform pseudorandom number generator ._ acm transactions on modeling and computer simulations , _ 1998 , * 8(1 ) * , 330 .kten , g. and srinivasan , a. , parallel quasi - monte carlo applications on a heterogeneous cluster . in k. t.fang , f. j. hickernell and h. niederreiter , editors , _ monte carlo and quasi - monte carlo methods 2000 _ , springer - verlag , berlin , 2002 , pp .406421 .kten , g. , shah , m. and goncharov , y. , random and deterministic digit permutations of the halton sequence . in l. 
plaskota and h.woniakowski , editors , _ monte carlo and quasi - monte carlo methods 2010 _ , springer , 2012 .saito , m. and matsumoto , m. , a deviation of curand : standard pseudorandom number generator in cuda for gpgpu . in _10th international confrence on monte carlo and quasi - monte carlo methods in scientific computing _ ,sydney , australia , 1317 february 2012 .salmon , j.k . ,moraes , m.a ., dror , r.o . and shaw , d.e ., parallel random numbers : as easy as 1 , 2 , 3 . in _ sc11 proceedings of 2011 international conference for high performance computing , networking , storage and analysis _ , ny , usa , 2011 .vandewoestyne , b. and cools , r. , good permutations for deterministic scrambled halton sequences in terms of -discrepancy ._ journal of computational and applied mathematics _ , 2006 , * 189 * , 341361 . | gpu computing has become popular in computational finance and many financial institutions are moving their cpu based applications to the gpu platform . since most monte carlo algorithms are embarrassingly parallel , they benefit greatly from parallel implementations , and consequently monte carlo has become a focal point in gpu computing . gpu speed - up examples reported in the literature often involve monte carlo algorithms , and there are software tools commercially available that help migrate monte carlo financial pricing models to gpu . we present a survey of monte carlo and randomized quasi - monte carlo methods , and discuss existing ( quasi ) monte carlo sequences in gpu libraries . we discuss specific features of gpu architecture relevant for developing efficient ( quasi ) monte carlo methods . we introduce a recent randomized quasi - monte carlo method , and compare it with some of the existing implementations on gpu , when they are used in pricing caplets in the libor market model and mortgage backed securities . gpu , monte carlo , randomized quasi - monte carlo , libor , mortgage backed securities [ [ section ] ] |
research in compressed sensing is expanding rapidly .the sufficient condition for -recovery based on the restricted isometry property ( rip ) is one of the celebrated results in this field .the design of sensing matrices with small rip constants is a theoretically interesting and challenging problem .currently , random constructions provide the strongest results , and the analysis of random constructions is based on large deviations of maximum and minimum singular values of random matrices . in the present paper , a random construction of bipolar sensing matrices based on binary linear codesis introduced and its rip is analyzed .the column vectors of the proposed sensing matrix are nonzero codewords of a randomly chosen binary linear code . using a generator matrix , a sensing matrix can be represented by -bits .the existence of sensing matrices with the rip is shown based on an argument on the ensemble average of the weight distribution of binary linear codes .the symbols and represent the field of real numbers and the finite field with two elements , respectively .the set of all real matrices is denoted by . in the present paper ,the notation indicates that is a column vector of length .the notation denotes -norm defined by the -norm is defined by where denotes the index set of nonzero components of .the functions and are the hamming weight and hamming distance functions , respectively .let be a real matrix , where the -norm of the -th ) ] represents the set of consecutive integers from to . the restricted isometry property of introduced by candes and tao plays a key role in a sufficient condition of -recovery .a vector is called an -sparse ) ] , and assume that has the rip with for any -sparse vector ( i.e. , ) , the solution of the following -minimization problem coincides exactly with , where .note that considers stronger reconstruction results ( i.e. , robust reconstruction ) .the matrix in ( [ l1recovery ] ) is called a _ sensing matrix_. 
the incoherence of defined below and the rip constant are closely related .the incoherence of is defined by , i \ne j } |\phi_i^t \phi_j|.\ ] ] the following lemma shows the relation between the incoherence and the rip constant .similar bounds are well known ( e.g., ) .[ deltaupper ] assume that is given . for any ] , which is defined by in the present paper , we consider an ensemble of binary parity check matrices , which is referred to herein as the _random ensemble_. the random ensemble contains all binary matrices .equal probability is assigned to each matrix in .let be a real - valued function defined on , which can be considered as a _random variable _ defined over the ensemble .the expectation of with respect to the ensemble is defined by { \stackrel{\triangle}{=}}\sum_{h \in r_{r , p } } p(h ) f(h).\ ] ] the expectation of weight distributions with respect to the random ensemble has been reported to be = { p \choose w } 2^{-r}.\ ] ] in the following , a combination of average weight distribution and markov inequality is used to show that the rip holds for with overwhelmingly high probability .[ first ] assume that we draw a parity check matrix from .the probability of selecting that satisfies is lower bounded by ( proof ) let us define as for .the condition implies that for any .namely , if holds , then is proven to be smaller than or equal to by lemma [ ripbound ] .next , we evaluate the ensemble expectation of : & = & \sum_{w=1}^{\lfloor ( \frac{1-\epsilon}{2 } ) p \rfloor } e_{r_{r , p}}[a_w(h ) ] \\ \nonumber & + & \sum_{w=\lceil ( \frac{1+\epsilon}{2 } ) p \rceil}^{p } e_{r_{r , p}}[a_w(h ) ] \\ \nonumber & = & \sum_{w=1}^{\lfloor ( \frac{1-\epsilon}{2 } ) p \rfloor } 2^{-r } { p \choose w } + \sum_{w=\lceil ( \frac{1+\epsilon}{2 } ) p \rceil}^{p } 2^{-r } { p \choose w } \\ & < & 2^{1-r } \sum_{w=0}^{\lfloor ( \frac{1-\epsilon}{2 } ) p \rfloor } { p \choose w } .\end{aligned}\ ] ] the final inequality is due to the following identity on the binomial 
coefficients : ,\quad { p \choose w } = { p \choose p - w } .\ ] ] using the markov inequality , we obtain the following upper bound on the probability of the event : & \le & e_{r_{r , p}}[k_\epsilon(h ) ] \\ & < & 2^{1-r } \sum_{w=0}^{\lfloor ( \frac{1-\epsilon}{2 } ) p \rfloor } { p \choose w } .\end{aligned}\ ] ] since takes a non - negative integer - value , we have > 1 - 2^{1-r } \sum_{w=0}^{\lfloor ( \frac{1-\epsilon}{2 } ) p \rfloor } { p \choose w } .\ ] ] this completes the proof .the following theorem is the main contribution of the present paper .[ main ] assume that is chosen randomly according to the probability assignment of . if holds , then holds with probability greater than where .the constant is given by ( proof ) a simpler upper bound on is required .using the inequality on binomial coefficients : we have where is the binary entropy function defined by in order to consider the exponent of an upper bound , we take the logarithm of ( [ simpler ] ) and obtain an upper bound of the exponent : \hspace{-4mm}&<&\hspace{-2mm}1 + \log_2 ( m+1 ) - p + \log_2 p \\ \label{exponentp } & + & \hspace{-2mm}p h\left(\frac{1-\epsilon}{2 }\right ) \\ & < &\hspace{-2mm}1 + 2\log_2 ( m+1 ) - \frac{1}{2 } p \epsilon^2 .\end{aligned}\ ] ] in the above derivation , we used the relation and the assumption . a quadratic upper bound on the binary entropy function ( lemma [ entropybound ] in appendix )was also exploited to bound the entropy term .letting we have lemma [ deltaupper ] and lemma [ first ] imply that , in this case , holds with probability greater than . due to lemma [ deltaupper ] , the -recovery condition ( [ sqr2 ] ) can be written as from this inequality , we have which proves the claim of the theorem . 
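The ensemble-average weight distribution used above, namely that the expectation of A_w(H) over the random ensemble equals C(p, w) 2^{-r}, can be checked by exhaustive enumeration for tiny parameters. The sketch below is illustrative only and not from the paper; it assumes A_w(H) counts the nonzero vectors x of Hamming weight w with Hx = 0 over GF(2), and it enumerates every parity-check matrix in the ensemble R_{r,p} for r = 2, p = 3.

```python
from itertools import product
from math import comb

def avg_weight_distribution(r, p):
    """Average A_w(H) over all 2^(r*p) binary r-by-p parity-check matrices H,
    where A_w(H) = #{x != 0 : Hx = 0 over GF(2), hamming weight of x is w}."""
    counts = [0] * (p + 1)
    n_matrices = 0
    for bits in product((0, 1), repeat=r * p):
        H = [bits[i * p:(i + 1) * p] for i in range(r)]
        n_matrices += 1
        for x in product((0, 1), repeat=p):
            if not any(x):
                continue  # skip the all-zero word
            if all(sum(h * v for h, v in zip(row, x)) % 2 == 0 for row in H):
                counts[sum(x)] += 1
    return [c / n_matrices for c in counts]

r, p = 2, 3
avg = avg_weight_distribution(r, p)
for w in range(1, p + 1):
    predicted = comb(p, w) * 2 ** (-r)  # the ensemble-average formula
    assert abs(avg[w] - predicted) < 1e-12, (w, avg[w], predicted)
```

The agreement is exact here rather than approximate: for any fixed nonzero x, each of the r rows annihilates x independently with probability 1/2, so averaging over the full ensemble reproduces C(p, w) 2^{-r} identically.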
in this subsection ,the asymptotic properties of the proposed construction are given .[ second ] assume that we draw a parity check matrix from .the probability of selecting that satisfies is upper bounded by ( proof ) here , we use a variant of chebyschev s inequality : \le \frac{var_{r_{r , p}}(k_{\epsilon}(h))}{e_{r_{r , p}}[k_{\epsilon}(h ) ] ^2},\ ] ] where denotes the variance with respect to . the variance is given by where and .the covariance of weight distributions denoted by is defined as follows : \\ & - & e_{r_{r , p}}[a_{w_1}(h ) ] e_{r_{r , p}}[a_{w_2}(h ) ] \end{aligned}\ ] ] for ] .applying the covariance formula to ( [ covcov ] ) , we have plugging the expectation of & = & 2^{-r}\left ( \sum_{w=1}^{a } { p \choose w } + \sum_{w = b}^{p } { p \choose w } \right ) \\ & = & 2^{-r}\left ( 2\sum_{w=0}^{a } { p \choose w } -1 \right ) \end{aligned}\ ] ] and the upper bound on the variance ( [ covbound ] ) into ( [ chebyschev ] ) proves the lemma. the asymptotic behavior of ] is summarized in the following theorem .[ asymptoticth ] assume that is fixed .let \\f_2(\epsilon , \alpha ) & { \stackrel{\triangle}{= } } & \lim_{p \rightarrow \infty } \frac 1 p \log_2 prob[k_\epsilon(h)\ne0].\end{aligned}\ ] ] the following inequalities give upper bounds on and , respectively : ( proof ) we first discuss ( [ f1 ] ) .let using the inequality on the binomial coefficients can be bounded from below : the inequality ( [ secondmoment ] ) can be simplified as for sufficiently large .the right - hand side of the above inequality can be bounded from above using ( [ entropylower ] ) : we are now able to derive the inequality given in ( [ f1 ] ) as follows : \\ & = & \alpha - h\left(\frac{1-\epsilon}{2 } \right).\end{aligned}\ ] ] the inequality given in ( [ f2 ] ) is readily obtained from ( [ exponentp ] ) .theorem [ asymptoticth ] implies a sharp threshold behavior in the asymptotic regime .let be which is referred to as the _ critical exponent_. 
if , ( [ f1 ] ) means that the probability to draw a matrix with decreases exponentially as goes to infinity . on the other hand , ( [ f2 ] ) indicates that the probability _ not _ to select a matrix with decreases exponentially if .in the present paper , a construction of a bipolar sensing matrix is introduced and its rip is analyzed .the existence of sensing matrices with the rip has been shown based on a probabilistic argument .an advantage of this type of sensing matrix is its compactness .a sensor requires -bits in order to store a truly random bipolar matrix . on the other hand , we need only -bits to store because we can use a generator matrix of as a compact representation of . however , this limited randomness of matrices results in a penalty on the rip constant .although the present construction is based on a probabilistic construction , the results shown in theorem [ main ] are weaker than the -recovery condition for the truly random bipolar matrix ensemble shown in .the condition shown in theorem [ main ] can be written as and is more similar to the conditions of deterministic constructions , such as that given in .lemma [ ripbound ] may be useful for evaluating the goodness of a randomly generated instance .the weight distribution of can be evaluated with time complexity , and an upper bound on the rip constant can be obtained using lemma [ ripbound ] .[ entropybound ] the following inequality approaches to . 
] holds for .+ ( proof ) let be the domain of which is .the first and second derivatives of are given by and respectively .it is easy to verify that for , which indicates that is convex .thus , we can obtain the global minimum of by solving , and we have and .let be an index set satisfying .for any , we have where is a sub - matrix of composed from the columns corresponding to the index set .for any , holds since we use this inequality to bound in ( [ cicj ] ) and obtain similarly , can be lower bounded by from the definition of , the lemma is proven .the author would like to thank the anonymous reviewers of ieee information theory workshop 2009 for their constructive comments .the present study was supported in part by the ministry of education , science , sports and culture of japan through a grant - in - aid for scientific research on priority areas ( deepening and expansion of statistical informatics ) 180790091 .e.candes , j. romberg and t.tao , `` robust uncertainty principles : exact signal reconstruction from highly incomplete frequency information , '' ieee trans . on information theory ,vol.52(2 ) , pp . 489 509 , 2006 .t.wadayama , `` on undetected error probability of binary matrix ensembles '' , in proceedings of ieee international symposium on information theory ( isit2008 ) , tronto ( 2008 ) ( related preprint : arxiv:0705.3995 ) . | a random construction of bipolar sensing matrices based on binary linear codes is introduced and its rip ( restricted isometry property ) is analyzed based on an argument on the ensemble average of the weight distribution of binary linear codes . |
max - stable processes are a canonical class of statistical models for multivariate extremes .they appear in a variety of applications ranging from insurance and finance to spatial extremes such as precipitation and extreme temperature .max - stable processes are exactly the class of non - degenerate stochastic processes that arise from limits of independent component - wise maxima .this fact provides a theoretical justification for their use as models of multivariate extremes .however , many useful max - stable models suffer from intractable likelihoods , thus prohibiting standard maximum likelihood and bayesian inference .this has motivated development of maximum _ composite _ likelihood estimators ( mcle ) for max - stable models as well as certain approximate bayesian approaches .in contrast to their likelihoods , the cumulative distribution functions ( cdfs ) for many max - stable models are available in closed form , or they are tractable enough to approximate within arbitrary precision .this motivates statistical inference based on the minimum distance method . in this paper, we propose an m - estimator for parametric max - stable models based on minimizing distances of the type where is a -dimensional cdf of a parametric model , is a corresponding empirical cdf and is a tuning measure that emphasizes various regions of the sample space using elementary manipulations it can be shown that minimizing distances of the type is equivalent to minimizing the _ continuous ranked probability score _ ( crps ) .[ def:(crps - m - estimator)](crps m - estimator ) let be a measure that can be tuned to emphasize regions of a sample space .define the crps functional then for independent random vectors with common distribution function we define the following crps m - estimator for . for simplicity , we shall assume throughout that the parameter space is a compact subset of , for some integer .the remainder of this paper is organized as follows . 
in section [ sec : extreme - values - and - nax - stability ] we review some essential multivariate extreme value theory and provide definitions and constructions of max - stable models . in section [ sec : crps - m - estimation ] we establish regularity conditions for consistency and asymptotic normality of the crps m - estimator and provide general formulae for calculating its asymptotic covariance matrix . in section [ sec : crps - for - max - stable ] we specialize these calculations to the max - stable setting . in section[ sec : simulation ] we conduct a simulation study to evaluate the proposed estimator for popular max - stable models .let be independent and identically distributed measurements of certain environmental or physical phenomena .for example , the may model wave - height , temperature , precipitation , or pollutant concentration levels at a site in a spatial region .if one is interested in extremes , it is natural to consider the asymptotic behavior of the point - wise maxima .suppose that , for some and , we have for some non - trivial limit process , where denotes convergence of the finite - dimensional distributions .the class of _ extreme value _ processes arising in the limit describe the statistical dependence of ` worst case scenaria ' and are therefore natural models of multivariate extremes .the limit in is necessarily a _max - stable process _ in the sense that for all , there exist and , such that where are independent copies of and where means equality of finite - dimensional distributions ( ch.5 of * ? ? ?due to the classic results of fisher - tippett and gnedenko , the marginals of are necessarily extreme value distributions ( frchet , reversed weibul or gumbel ) .they can be described in a unified way through the generalized extreme value distribution ( gev ) : where , and where and are known as the location , scale and shape parameters .the cases and correspond to frchet , reverse weibull , and gumbel , respectively ( see , e.g. 
ch.3 and 6.3 in for more details ) .the dependence structure of the limit extreme value process rather than its marginals is of utmost interest in practice .arguably , the type of the marginals is unrelated to the dependence structure of and as it is customarily done , we shall assume that the limit process has been transformed to standard -frchet marginals .that is , for some scale ( ch.5 of * ? ? ?let be a max - stable process with -frchet marginals as in .then , its finite - dimensional distributions are multivariate max - stable random vectors and they have the following representation : where and where is a finite measure on the positive unit sphere known as the _ spectral measure _ of the max - stable random vector ( see e.g. proposition 5.11 in * ? ? ?the integral in the expression is referred to as the _ tail dependence function _ of the max - stable law .we shall often use the notation : where , for the tail dependence function of the max - stable random vector .it readily follows from that for all , the max - linear combination is -frchet random variable with scale .conversely , a random vector with the property that all its non - negative max - linear combinations are -frchet is necessarily multivariate max - stable .this invariance to max - linear combinations is an important feature that will be used in our estimation methodology ( section [ sec : crps - for - max - stable ] , below ) .some max - stable models are readily expressed in terms of their spectral measures while others via tail dependence functions .these representations however are not convenient for computer simulation or in the case of random processes , where one needs a handle on all finite - dimensional distributions .the most common constructive representation of max - stable process models is based on poisson point processes .see also for an alternative .indeed , consider a measure space and let be a poisson point process on with intensity measure .[ p : de - haan - simple ] let be a 
collection of non - negative integrable functions and let then , the process is max - stable with -frchet marginals and finite - dimensional distributions : the proof of this result is sketched in appendix [ sub : derivation - of - max - stablecdf ] .relation is known as the _ de haan spectral representation _ of and as the spectral functions of the process .it can be shown that every separable in probability max - stable process has such a representation ( see and proposition 3.2 in ) .the max - functional in has the properties of an _ extremal stochastic integral_. indeed , we have max - linearity : for all the above max - linear combination is therefore -frchet and has a scale coefficient : one can also show that and are independent , if and only if , for -almost all .that is , the extremal integrals defining and are over non - overlapping sets .this shows that for max - stable process models pairwise independence implies independence .further , converges in probability to if and only if converges in to , as . for more details , see e.g and .the expressions and may be related through a change of variables ( proposition 5.11 ) . while the spectral measure in is unique , a max - stable process has many different spectral function representations .nevertheless , relation provides a constructive and intuitive representation of , that can be used to build interpretable models .a great variety of max - stable models can be defined by specifying the measure space and an accompanying family of spectral functions or equivalently through a consistent family of spectral measures or tail dependence functions .we review next several popular max - stable models and their basic features . _ ( multivariate logistic ) _ let have the cdf for and . ] is non - negative and equals zero if and only if and are independent , analagous to covariance for gaussian processes. 
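Before reviewing further models, the Fréchet marginals and the CRPS functional of the earlier definition can be illustrated on the simplest possible case. The sketch below is hypothetical and not the paper's multivariate implementation: it simulates 1-Fréchet data with scale sigma by inverse transform (if U is uniform on (0,1), then sigma / (-log U) has cdf exp(-sigma/x)), then recovers sigma by minimizing a discretized version of the CRPS objective, with the measure mu replaced by a counting measure on a grid and a grid search standing in for a proper optimizer.

```python
import bisect
import math
import random

rng = random.Random(1)
true_sigma = 2.0
n = 5000
# inverse-transform draws from the 1-Frechet law F_sigma(x) = exp(-sigma / x)
sample = sorted(true_sigma / -math.log(rng.random()) for _ in range(n))

grid = [0.1 * k for k in range(1, 200)]  # discretization of the measure mu
f_emp = [bisect.bisect_right(sample, x) / n for x in grid]  # empirical cdf F_n

def crps_objective(sigma):
    """Discretized V(sigma) = sum over grid of (F_n(x) - F_sigma(x))^2."""
    return sum((fe - math.exp(-sigma / x)) ** 2 for fe, x in zip(f_emp, grid))

candidates = [0.05 * k for k in range(1, 101)]  # sigma in (0, 5]
est = min(candidates, key=crps_objective)
assert abs(est - true_sigma) < 0.25, est
```

The empirical cdf enters the objective only once, so it is precomputed; the grid search then costs one pass over the grid per candidate scale.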
_ ( extremal coefficient ) _ a popular summary measure of multivariate dependence in max - stable models is the extremal coefficient .define for a process with standard 1-frchet marginals and thus , where corresponds to complete dependence while implies that s , are independent .it is well know that for a process with standard -frchet marginals , ] and by condition _( ii ) _ , thus , by the lebesgue dominated convergence theorem \mu\left(dx\right)=\int_{\mathbb{r}^{d}}\lim_{n\to\infty}\mathbb{e}\left[\overline{f}_{n}\left(x\right)\wedge\overline{f}_{\theta_{0}}\left(x\right)\right]\mu\left(dx\right).\ ] ] the strong law of large numbers implies that converges almost surely to .hence , by applying dominated convergence again , we obtain =\overline{f}_{\theta_{0}}\left(x\right),\ \ \mbox{for all } x\in\mathbb{r}^{d}.\ ] ] this , by implies that the right - hand side of vanishes as , which in view of yields the desired convergence in probability and the proof is complete . by a standard argument using the lebesgue dct , condition _ ( iii )_ of this proposition ensures that integration and differentiation can be interchanged in all that follows .we proceed by establishing _( i)-(iii ) _ of theorem [ thm:(asynormcrps ) ] . _( ii ) _ observe that equals \left(f_{\theta_{1}}\left(y\right)-f_{\theta_{2}}\left(y\right)\right)\right\ } \mu\left(dy\right)\right|\\ \le2\int_{\mathbb{r}^{d}}\left|f_{\theta_{1}}\left(y\right)-f_{\theta_{2}}\left(y\right)\right|\mu\left(dy\right)\end{gathered}\ ] ] where the last relation follows from the triangle inequality and fact that then , by the mean value theorem and the cauchy - schwartz inequality where by assumption _( ii ) _ of this proposition , is finite . 
hence _( ii ) _ of theorem [ thm:(asynormcrps ) ] holds where is constant ( and therefore trivially ) .finally , we derive by considering its entry .let denote .\\ & = & \mathbb{e}\left\ { \int_{\mathbb{r}^{d}}2\left(f_{\theta}\left(y_{1}\right)-\mathbf{1}_{\left\ { x\le y_{1}\right\ } } \right)\partial_{i}f_{\theta}\left(y_{1}\right)\mu\left(dy_{1}\right)\right.\\ & & \qquad\times\left.\left.\int_{\mathbb{r}^{d}}2\left(f_{\theta}\left(y_{2}\right)-\mathbf{1}_{\left\ { x\le y_{2}\right\ } } \right)\partial_{j}f_{\theta}\left(y_{2}\right)\mu\left(dy_{2}\right)\right|_{\theta=\theta_{0}}\right\ } \\ & = & 4\mathbb{e}\left\ { \left.\int_{\mathbb{r}^{d}}\int_{\mathbb{r}^{d}}b_{\theta}\left(x , y_{1},y_{2}\right)\partial_{i}f_{\theta}\left(y_{1}\right)\partial_{j}f_{\theta}\left(y_{2}\right)\mu\left(dy_{1}\right)\mu\left(dy_{2}\right)\right|_{\theta=\theta_{0}}\right\ } \end{aligned}\ ] ] where expanding the integrand and applying fubini gives where which is exactly the element of , as desired . [ [ proofs - for - section - seccrps - for - max - stablesubproofs - for - section-4 ] ] proofs for section [ sec : crps - for - max - stable][sub : proofs - for - section-4 ] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ observe that for and hence the integrand in vanishes , as .also , by using a taylor series expansion of the exponential function at zero , it is easy to see that as .therefore , the integral in is finite .now , by making the changes of variables and in the last two integrals respectively , we obtain this , in view of , yields the expression in terms of the incomplete gamma function in . for note that is equal in distribution to a -frchet random variable with scale which has finite expectation for , applying fubini s theorem , and observing that , with this establishes . 
for , substituting the expression from lemma [ lem : frechetintegral ] we have \ ] ] which , after substituting , yields .\label{eq : efrechetint1}\ ] ] using the fact that is distributed -frchet with scale we have now the substitution gives plugging into yields \\ & = 2\sqrt{\pi}\left[\frac{2v_{0}}{\sqrt{v_{0}+v}}-\sqrt{v_{0}}+\frac{2v}{\sqrt{v_{0}+v}}-\sqrt{2v}\right]\\ & = \eqref{eq : efrechetintegral}.\end{aligned}\ ] ] now recall that substituting , we obtain where , in view of lemma [ lem : frechetintegral ] , one can show that our next goal is to calculate where the expectation is taken under . using the fact that is 1-frchet with scale , lemma [ lem : efrechetintegral]_(ii ) _ implies thus , in view of , becomes this , in view of , implies and completes the proof . | max - stable random fields provide canonical models for the dependence of multivariate extremes . inference with such models has been challenging due to the lack of tractable likelihoods . in contrast , the finite dimensional cumulative distribution functions ( cdfs ) are often readily available and natural to work with . motivated by this fact , in this work we develop an m - estimation framework for max - stable models based on the _ continuous ranked probability score _ ( crps ) of multivariate cdfs . we start by establishing conditions for the consistency and asymptotic normality of the crps - based estimators in a general context . we then implement them in the max - stable setting and provide readily computable expressions for their asymptotic covariance matrices . the resulting point and asymptotic confidence interval estimates are illustrated over popular simulated models . they enjoy accurate coverages and offer an alternative to composite likelihood based methods .
we gather u.s. mortality rates from www.cdc.gov and aggregate search data of u.s. citizens from www.google.com . the mortality data is annual , and google search data is weekly , so in order to make the google search data annual we take the yearly average . then we measure the change in mortality against the change in average search volume , because we want to know whether next year's mortality rates will increase or decrease , and by how much . we then take logs of the differenced data , since the logarithm is a monotone transformation and reduces variance in the data ( the purpose of this paper is to show whether correlations exist - further methods can be employed to find how strong they are ) . + ( figure 1 caption : the line outside of the significance band shows that there is a significant correlation at lag . ) figure 1 suggests a correlation between how mortality moves and how hate moves one year in the future . the time series plot agrees with figure 1 and suggests that hate should increase from time 6 to 7 , since the mortality rate increased from time 5 to 6 .
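The preprocessing pipeline described above — weekly search volumes averaged into annual values, first differences taken, then logs — can be sketched as follows. This is a hypothetical illustration with synthetic numbers, not the actual CDC or Google data; note that the log step, read literally, is only defined when every year-over-year change is positive.

```python
import math

def annualize(weekly, weeks_per_year=52):
    """Collapse a weekly series to annual values by yearly averaging."""
    years = len(weekly) // weeks_per_year
    return [sum(weekly[y * weeks_per_year:(y + 1) * weeks_per_year]) / weeks_per_year
            for y in range(years)]

def diff(series):
    """Year-over-year changes."""
    return [b - a for a, b in zip(series, series[1:])]

def log_diff(series):
    """Log of the differenced data (valid only for positive changes)."""
    return [math.log(v) for v in diff(series)]

# synthetic weekly search volume covering 7 "years" (hypothetical numbers)
weekly = [100 + 2 * (w // 52) + (w % 5) for w in range(7 * 52)]
annual = annualize(weekly)
assert len(annual) == 7
changes = diff(annual)
assert len(changes) == 6            # 7 annual points give 6 changes, as in the text
assert all(c > 0 for c in changes)  # the log step below requires this
logged = log_diff(annual)
assert len(logged) == 6
```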
table 1 shows where we found significant cross - correlations and at what lag . an x denotes that there were no significant lags . the significant lags at for all people and whites for foreclosure and hate suggest that mortality rates can predict how much people search for those words . the black female data had very strange results , which may suggest that their mortality data in the period that we have is an outlier compared to how their mortality rates normally behave . the only positive lag correlation was with black females and `` ship , '' which , in theory , could suggest that when more people are searching for `` ship , '' more black females are dying , but since the black female data produced strange results , it is most likely that their data is unreliable . more data would be needed to support this claim . + unfortunately there are no other significant lags with a positive value , suggesting that none of these words can help predict mortality rates . on the other hand , since for both `` foreclosure '' and `` hate '' we found significance at lag , it may offer psychological or sociological evidence about how humans react to financial failure and death . perhaps more people search for `` hate '' when white people die , but a black person's death has no effect on how often the word `` hate '' is searched for . .table indicating which lags produced statistically significant results . = male , and = female . where applicable , the superscripts indicate the sign of the correlation .
the data was very limited , which caused many problems . mortality data was annual , and the latest data point was for 2010 . google search data only went back to 2004 , so we had 7 points of data , and differencing these gave us only 6 changes . we are also doing multiple testing at once , so we should account for some sort of test correction to handle the probability of type i error , such as the bonferroni correction . there are essentially 81 tests being performed , however , so in order to get some preliminary results we forgo these test corrections , as the purpose of this paper is to demonstrate how this method can be used to find correlations and show that there may be significance . in addition , reducing the number of tests is also possible , since we divided the analysis into several groups by race and sex . + this method could be repeated with outlier data removed to possibly improve the result . for example , the black women data acted strangely , which may suggest the data we had for this time period did not follow normal mortality trends . if we remove black women from the overall mortality data , we may get more accurate conclusions . + in addition , in the future our analytical methods will be more practical . for example , 10 years from now we will have more than double the data points we had previously , allowing for a huge improvement in accuracy and potential results . in addition , if more detailed mortality data exists , we can test for correlations between certain words and age groups .
as an example , we could find positive correlation between people searching for `` alcohol '' and deaths for people in the age range of 15 - 25 . + although it will probably not be possible , if mortality data were available weekly , it would be tremendous for data analysis , since we would have 52 times as many data points . although our data provided no significant conclusions for positive - lag cross - correlations , we did find significant evidence at negative lags , which opens two new doors . the first is that maybe we haven't found the right word , and given the correct word we can predict mortality rates . the second is that we did find significant evidence for negative lags , providing psychological or sociological insight as to how mortality rates affect our searching on google . further work can definitely be done to find correlations with mortality , but even aside from that , it provides more evidence as to how our lives affect google search data and how google search data affects our lives . 1 . preis , t. , d. reith , and h. e. stanley . `` complex dynamics of our economic life on different scales : insights from search engine query data . '' _ philosophical transactions of the royal society a : mathematical , physical and engineering sciences _ 368.1933 ( 2010 ) : 5707 - 5719 . 2 . centers for disease control and prevention . web . 27 apr . 2012 . < http://www.cdc.gov/>. 3 . `` google insights for search . '' _ google . _ web . < http://www.google.com/insights/search/>. | inspired by correlations recently discovered between google search data and financial markets , we show correlations between google search data and mortality rates . words with negative connotations may be associated with increased mortality rates , while words with positive connotations may be associated with decreased mortality rates , and so statistical methods were employed to investigate further .
statistically significant correlations between google search data and financial markets exist . the methods used there can be applied to find correlations between any time series data and google searches . since insurance premiums are based on mortality rates , if one can predict mortality rates with a point or interval estimator , an insurance company can re - evaluate a policy every year and update the premium based on what search data may suggest , in addition to standard pricing methods . + as google search data and mortality rates are time series data , we can use cross - correlations ( defined below ) to determine whether correlations exist and to show where they exist . using r , we can use the ccf ( ) ( cross - correlation function ) command to easily graph and find significant values depending on a significance level ( we choose a liberal ) . in figure 1 , we graph a cross - correlation function of all races , both sexes , and the word hate as a demonstration . these were the graphs used in the statistical analysis of the project . figure 2 shows how the significant cross - correlation relates to how the graphs work . + nine words were used for testing . these words were chosen somewhat arbitrarily , but with the intention that they would hopefully be correlated with mortality . for example , the word `` hate '' was used and may imply that if more people are hateful ( increased search for hate ) then mortality rates may increase ( due to hate - related crimes , for example ) . + in addition , we examine nine groups of mortality rates . there is a generic group ( all sexes , all races ) and then groups divided into male and female , and racial groups divided into white and black . the reasoning for this is that it will both let us determine if there is any outlier data , and will also let us see if there are correlations inside a specific group . it is possible that one race or one sex may react differently to a certain word based on their culture .
as discussed above , we present the cross - correlation function : \[ \rho_{xy}(\delta t ) = \frac{e[x(t)y(t+\delta t)]-e[x(t)]e[y(t+\delta t)]}{\sqrt{e[x^2(t)]-e[x(t)]^2}\sqrt{e[y^2(t+\delta t)]-e[y(t+\delta t)]^2}} . \] essentially it checks for relationships between the two time series and , but at different times : for example , if , it checks for the relationship between and . the denominator simply scales the numerator so that $ -1 \le \rho_{xy}(\delta t ) \le 1 $ . |
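since the paper computes these estimates with r's ccf ( ) , a minimal python sketch of the same sample statistic may help . the function name and the edge - handling convention are ours , not the paper's ; ccf ( ) draws significance bands ( roughly $\pm 2/\sqrt{n}$ under the no - correlation null ) , which this sketch does not .

```python
import numpy as np

def cross_corr(x, y, lag):
    """Sample normalized cross-correlation between x(t) and y(t + lag).

    Positive lag compares x now with y in the future; this mirrors the
    quantity estimated by R's ccf(), up to edge conventions.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    if lag >= 0:
        a, b = x[: len(x) - lag], y[lag:]
    else:
        a, b = x[-lag:], y[: len(y) + lag]
    a = a - a.mean()
    b = b - b.mean()
    # covariance divided by the product of standard deviations
    return float(np.dot(a, b) / (len(a) * a.std() * b.std()))
```

applied to a series and a lagged copy of itself , the statistic peaks at the true lag , which is exactly the pattern one looks for in the ccf graphs described above .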
in the present paper we study the local time with respect to the spatial variable of the solution to the following white noise driven heat equation here is a 1-dimensional gaussian space - time white noise in since is a random generalized function on the equation is understood in the sense of generalized functions . can be considered as a centered gaussian random measure with independent values on disjoint subsets of sometimes it is called the brownian sheet . it is known that the solution to is given by where the stochastic integral with respect to the brownian sheet is defined as an ito integral by means of the isometry here the interest in the equation arises since it describes the random motion of a string . for example , m.hairer in considered the following model . take particles with positions immersed in a fluid and assume that nearest neighbours are connected by harmonic springs . if an external force is acting on the particles , then the equations of motion in the overdamped regime have the following representation where is a constant factor characteristic of the spring : its stiffness . this constant comes from hooke 's law . the described model can be considered as a primitive model for a polymer chain consisting of monomers and without self - interaction . it does not take into account the effect of the molecules of water that would randomly `` kick '' the particles . assume that these kicks occur randomly and independently at a high rate . this effect can be modelled by increments of independent wiener processes as follows formally taking the continuum limit ( with the scalings and ) as one can see that this system is well described by the solution to a stochastic partial differential equation . a more general white noise driven heat equation was considered by t.funaki in .
for given two functions and let be a diffusion process on determined by the stochastic differential equation where is a -dimensional wiener process .funaki introduced the following , { \mathbb r}^d) ] here is a -dimensional gaussian white noise with two parameters in \times[0;1]). ] is said to be an integrator if there exists the constant such that for an arbitrary partition and real numbers the following relation holds this inequality allows to define a stochastic integral for any square integrable function with respect to the integrator .the following statement describes the structure of all gaussian integrators .[ lem1.1] the centered gaussian process ]and a continuous linear operator in the same space such that } , \xi),\ t\in[0;1].\ ] ] it follows from lemma [ lem1.1 ] that all properties of the gaussian integrator can be characterized in terms of the operator since we are going to study the local time for the solution to the equation , let us recall the general definition of local time for a 1-dimensional random process . ] and a white noise in the same space such that the process ] suppose that let us check that there exists a constant such that for any ) ] note that applying the cauchy inequality one can obtain the following relation it follows from that in the we prove that the gaussian integrator generated by a continuously invertible operator has a local time at any real point which is jointly continuous in time and space variables . 
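the two elided formulas in this passage can be restated in standard notation . the following is a reconstruction based on the usual definitions in the literature on gaussian integrators and local times ; the constant $c$ and the partition notation are ours :

```latex
% Gaussian integrator: there exists C > 0 such that for every
% partition 0 = t_0 < t_1 < \dots < t_n = 1 and reals a_0, \dots, a_{n-1},
\mathbb{E}\Big(\sum_{k=0}^{n-1} a_k \,\big(x(t_{k+1}) - x(t_k)\big)\Big)^{2}
  \le C \sum_{k=0}^{n-1} a_k^{2}\,(t_{k+1} - t_k).

% Local time of a 1-dimensional process x at the point u:
l(t, u) = \lim_{\varepsilon \to 0} \frac{1}{2\varepsilon}
          \int_0^t \mathbb{1}\{\,|x(s) - u| < \varepsilon\,\}\, ds .
```

the first inequality is what makes the stochastic integral of a square integrable function with respect to the integrator well defined ; the second is the occupation - density definition of local time recalled in the text .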
consequently , the gaussian integrator ] and the partition such that for any every gaussian integrator ] the answer is given by the next theorem .consider the partition such that for any then one can written the following relation denote by then to prove the theorem it suffices to check that since then follows from the existence of finite limit as let us check it .one can see that since ] and a white noise in ) ] which is isomorphic to denote by where is isomorphism .then ) ] is the gaussian process , then the random variables and are jointly gaussian .the random variables and are independent .it implies that the random variables and are jointly gaussian .hence , the random variable and are jointly gaussian .to continue the proof of the theorem we need the following statement which can be easily proved .[ lem2.2 ] let be a white noise in the space ). ] and the random variable that is independent of the white noise , such that admits the representation applying lemma 2.2 to the white noise and the random variable one can see that can be represented as follows } , \widetilde{\xi})-(h,\widetilde{\xi})-\zeta)\cdot\ ] ] } , \widetilde{\xi})-(h,\widetilde{\xi})-\zeta)d\vec{v},\ ] ] where denote by the projection onto linear span generated by then possesses the representation } , \widetilde{\xi } ) -(h,\widetilde{\xi})-\zeta\big)\cdot\ ] ] } , \widetilde{\xi } ) -(h,\widetilde{\xi } ) -\zeta)/ ( p_h\widetilde{a}{1\!\!\,{\rm i}}_{[u_k ; v_i ] } , \widetilde{\xi}),\ i=\overline{1 , 2p},\ ( h,\widetilde{\xi}),\ \zeta\big\ } d\vec{v}.\ ] ] denote by let be the gramian matrix constructed by elements put },\ldots,(i - p_h)\widetilde{a}{1\!\!\,{\rm i}}_{[u_k ; v_{2p}]})+i(\varepsilon_1 , \varepsilon_2),\ ] ] }-(h , \widetilde{\xi})-\zeta\\ \vdots\\ p_h\widetilde{a}{1\!\!\,{\rm i}}_{[u_k ; v_{2p}]}-(h , \widetilde{\xi})-\zeta \end{pmatrix}\ ] ] then equals it follows from that to end the proof of the theorem one must check that the integral } , \ldots , ( i - 
p_h)\widetilde{a}{1\!\!\,{\rm i}}_{[u_k ; v_{2p}]})}}\ ] ] converges .really , for any } , \ldots , ( i - p_h))\widetilde{a}{1\!\!\,{\rm i}}_{[u_k ; v_{2p}]}}}\cdot\ ] ] } , \ldots , ( i - p_h))\widetilde{a}{1\!\!\,{\rm i}}_{[u_k ; v_{2p}]})\vec{\theta},\vec{\theta})}\ ] ] as since for any less or equal to } , \ldots , ( i - p_h))\widetilde{a}{1\!\!\,{\rm i}}_{[u_k ; v_{2p}]}}},\ ] ] then using and applying the lebesgue dominated convergence theorem one can obtain the statement of the theorem . therefore , let us check . to do this we need the following statements which are related to the properties of the gram determinant in the integrand .those statements were proved in the works .let be the finite - dimensional subspace of the space ). ] then ( 16 ) can be represented as follows } , \ldots,\widetilde{a}{1\!\!\,{\rm i}}_{[u_k;v_p ] } , \widetilde{a}\widetilde{a}^{-1}\frac{h}{\|h\| } \big).\ ] ] one can check that the following theorem holds .it follows from theorem [ thm2.3 ] and properties of the determinant that greater or equal to } , \ldots , { 1\!\!\,{\rm i}}_{[u_k , v_p ] } , \widetilde{a}^{-1}h).\ ] ] denote by the subspace of all step functions in let be an orthonormal basis for and be an orthonormal basis for the orthogonal complement of in note that is an orthonormal basis for one can see that the following estimates take place .[ lem2.3] there exists a constant such that the following relation holds } , \ldots , { 1\!\!\,{\rm i}}_{[0 , t_k ] } , f_1 , \ldots , f_s , e_1 , \ldots , e_m)\geq c\ g({1\!\!\,{\rm i}}_{[0;t_1 ] } , \ldots , { 1\!\!\,{\rm i}}_{[0 , t_k ] } , f_1 , \ldots , f_s).\ ] ] [ lem2.4 ] let be the points of jumps of functions then there exists a constant which depends on such that the following relation holds } , \ldots , { 1\!\!\,{\rm i}}_{[0 , t_k ] } , f_1 , \ldots , f_s)\geq c_{\vec{s}}\ g({1\!\!\,{\rm i}}_{[0;t_1 ] } , \ldots , { 1\!\!\,{\rm i}}_{[0 , t_k ] } , { 1\!\!\,{\rm i}}_{[0;s_1 ] } , \ldots , { 1\!\!\,{\rm i}}_{[0 , 
s_n]}).\ ] ] if is not a step function , then it follows from lemma [ lem2.3 ] that } , \ldots,{1\!\!\,{\rm i}}_{[u_k ; v_p ] } , \widetilde{a}^{-1}h)\geq c\ g({1\!\!\,{\rm i}}_{[u_k ; v_1 ] } , \ldots,{1\!\!\,{\rm i}}_{[u_k ; v_p]}).\ ] ] the relation implies that less or equal to },\ldots,{1\!\!\,{\rm i}}_{[u_k ; v_{2p } ] } ) } } = \ ] ] here is positive constant which depends on and if is a step function , then it follows from lemma [ lem2.4 ] that } , \ldots,{1\!\!\,{\rm i}}_{[u_k ; v_{2p } ] } , \widetilde{a}^{-1}h)\geq c_{\vec{s}}\ g({1\!\!\,{\rm i}}_{[u_k ; v_1 ] } , \ldots,{1\!\!\,{\rm i}}_{[u_k ; v_{2p}]},{1\!\!\,{\rm i}}_{[u_k ; s_1 ] } , \ldots,{1\!\!\,{\rm i}}_{[u_k ; s_n]}),\ ] ] where are the points of jumps of the function the relation implies that less or equal to } , \ldots,{1\!\!\,{\rm i}}_{[u_k ; v_{2p}]},{1\!\!\,{\rm i}}_{[u_k ; s_1 ] } , \ldots,{1\!\!\,{\rm i}}_{[u_k ; s_n]})}}.\ ] ] consider the following partition of the domain where note that it follows from that to finish the proof it suffices to check that the convergence of the integral we proved in . for convenience of the reader ,let us now briefly recall essential points of the proof .the change of variables in the integral implies that } , \ldots,{1\!\!\,{\rm i}}_{[0;v_k]},{1\!\!\,{\rm i}}_{[0;1]})}}=\ ] ] } , \ldots,(i-\widetilde{p}){1\!\!\,{\rm i}}_{[0;v_k]},{1\!\!\,{\rm i}}_{[0;1]})}},\ ] ] where is a positive constant which depends on and is a projection onto linear span generated by }. ] is the brownian bridge .to finish the proof it suffices to check that really , note that it is known that joint probability density of the random variables and has the following representation then where is the gamma function .99 d. khoshnevisan , _ five lectures on brownian sheet : summer intership program university of wisconsin - madison _ , 2001 , 61 p. j.b.walsh , _ an introduction to stochastic partial differential equations _ , lecture notes in math . 
1180 ( 1986 ) , 265 - 439 . r.cairoli , j.b.walsh , _ stochastic integrals in the plane _ , acta mathematica 134 ( 1975 ) , 111 - 183 . m.hairer , _ an introduction to stochastic pdes _ , the university of warwick , 2009 , 78 p. t.funaki , _ random motion of strings and related stochastic evolution equations _ , nagoya math . j. 89 ( 1983 ) , 129 - 193 . m.kardar , g.parisi , y .- c.zhang , _ dynamic scaling of growing interfaces _ , physical review letters 56 ( 1986 ) , no . 9 , 889 - 892 . a.a.dorogovtsev , o.l.izyumtseva , _ self - intersection local time _ , ukrainian math . journal 68 ( 2016 ) , no.3 , 291 - 341 . a.a.dorogovtsev , o.l.izyumtseva , _ self - intersection local times for gaussian processes _ , germany : lap lambert academic publishing , 2011 , 152 p. a.a.dorogovtsev , _ stochastic integration and one class of gaussian random processes _ , ukr . math . journal 50 ( 1998 ) , no.4 , 495 - 505 . o.l.izyumtseva , _ on the local times for gaussian integrators _ , theory of stochastic processes 19 ( 35 ) ( 2014 ) , no.1 , 11 - 25 . a.a.dorogovtsev , o.l.izyumtseva , _ properties of gaussian local times _ , lithuanian mathematical journal 55 ( 4 ) ( 2015 ) , 489 - 505 . a.a.dorogovtsev , o.l.izyumtseva , _ on self - intersection local times for generalized brownian bridges and the distance between step functions _ , theory of stochastic processes 20 ( 36 ) ( 2015 ) , no.1 , 1 - 13 . o.l.izyumtseva , _ moments estimates for local times of a class of gaussian processes _ , communications on stochastic analysis 10 ( 2016 ) , no.1 , 97 - 116 . a.n.borodin , _ brownian local time _ , russian math . surveys 44 ( 1989 ) , no.2 , 1 - 51 . | in this article we discuss the existence of the local time for a class of gaussian processes which appear as solutions to some stochastic evolution equations . we show that on small intervals such processes are gaussian integrators generated by a continuously invertible operator .
this allows us to conclude that the processes under consideration have a local time on any finite interval with respect to the spatial variable . |
the bayesian approach to parametric model selection requires the specification of a prior probability distribution over the parameter space .the jeffreys prior , which is proportional to the square root of the determinant of the fisher information computed in the parameter space , has been shown to be the uniform prior over all _ distributions _ indexed by the parameters in a parametric family .geometrically , its integral over a region of the parameter space computes a volume that essentially measures the fraction of statistically distinguishable probability distributions within that region . in this interpretation , the jeffreys prior distribution where simply measures the fractional volume of the small element relative to total volume of the parametric manifold . here is the fisher information on the parameter space and is the standard riemannian volume element on .the volume also appears in the minimum description length ( mdl ) approach to model selection , conceptually because it effectively measures how many different distributions are describable by different parameter choices .an important difficulty in applying the mdl approach to model selection occurs when the parameter space is noncompact and the volume diverges . in this case , from the bayesian perspective , a uniform prior on the parameter space does not exist , while from the mdl perspective the number of models that might be describable diverges , leading to problems with the definition of the description length .of course the parameter space can be cut off by hand , but unless the choice of cut - off is well founded , it can lead to artifacts in the comparison of different model families . 
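the jeffreys prior described in words above has a standard closed form . as a reconstruction in our own notation , with $g_{ij}(\theta)$ the fisher information metric on the parameter manifold $\theta \in \Theta$ :

```latex
p(\theta)\, d\theta
  = \frac{\sqrt{\det g(\theta)}\; d\theta}
         {\displaystyle\int_{\Theta} \sqrt{\det g(\theta')}\; d\theta'} ,
\qquad
V = \int_{\Theta} \sqrt{\det g(\theta)}\; d\theta .
```

the prior is well defined only when the total volume $V$ converges , which is precisely the noncompactness problem discussed next .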
unfortunately , in many practical problems the parameter space _ is _ noncompact and diverges . for example , in astrophysics , the detection of exoplanets depends on a model of the light coming from the occluded star . this model will contain a non - compact direction representing the orbital period of the planet see , e.g. , . for examples from psychophysics see , e.g. , . in this note we argue that merely specifying the experimental set - up before the measurement of any actual data influences the prior distribution on the parameter space . this occurs because , given the finite number of measurements in any experiment , many of the probability distributions indexed by a parametric manifold will be statistically indistinguishable . in cases where the parameter space is noncompact , the uniform prior conditioned on the experimental set - up can thus become well - defined . in the geometric language of , the volume that measures the number of probability distributions in the parametric family that are statistically distinguishable given a _ finite _ number of measurements can be finite even if the parameter space is non - compact . in effect , specifying the experimental set - up can render the parameter space compact . our results illustrate how the choice of experimental set - up influences the measure on the parameter space of a model , thereby affecting which model is regarded as most ` correct ' . in section [ general - argument ] we briefly review the computation of posterior probabilities , and consider the effect of conditioning on the experimental set - up on the parameter space measure . in section [ sec : example ] we apply these considerations to a physical problem : the analysis of light - curves of stars with orbiting planets .
in this example we see that the volume of the parameter space is rendered effectively finite after the experimental set - up is specified . [ general - argument ] suppose one is interested in some physical phenomenon , and has made relevant measurements : . further suppose that there are two different parametric models , and , that aim to describe the phenomenon in question . the basic question to be answered is which of the two models is the better one , given the experimental data . the probability - theoretic answer to this question is to compute the posterior probabilities and , which we can write using bayes ' rule as where is the vector of variables parametrising , and is the volume form associated to the measure on the parameter space , which we will define shortly . a corresponding expression can also be written for . since we wish to compare and , we can ignore the common factor , and we will assume and drop this factor as well . thus the only remaining ingredient to be defined is the volume form ; we simply quote the result from : the volume form that gives equal weight to all statistically distinguishable distributions in the parametric family is where is the _ fisher information matrix _ , defined as the second derivative of the kullback - leibler distance : where is the integration measure over the sample space , and is the distribution function associated to the values of the parameters . now we have defined everything needed to compute the posterior probabilities , and we illustrate the formalism by applying it to the analysis of light - curves . using this , we can compute the fisher information matrix by evaluating the kullback - leibler distance between two nearby points and taylor expanding : on the third line , the terms linear in vanish : exchanging the order of integration and differentiation , the integral of yields the constant 1 , which then differentiates to zero .
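the claim that the fisher information is the curvature ( second derivative ) of the kullback - leibler distance can be checked numerically . the sketch below does this for a toy family , a gaussian of fixed width with the mean as the only parameter , where the exact answer is $1/\sigma^2$ ; all names are ours .

```python
def kl_gauss(mu1, mu2, sigma):
    # KL divergence between N(mu1, sigma^2) and N(mu2, sigma^2),
    # which has the closed form (mu1 - mu2)^2 / (2 sigma^2).
    return (mu1 - mu2) ** 2 / (2.0 * sigma ** 2)

def fisher_from_kl(kl, theta, h=1e-4):
    """Fisher information as the curvature of KL at zero separation:
    g(theta) = d^2/dphi^2 KL(theta, phi) evaluated at phi = theta,
    via a central finite difference."""
    return (kl(theta, theta + h) - 2 * kl(theta, theta)
            + kl(theta, theta - h)) / h**2
```

for sigma = 2 the routine returns 0.25 = 1/sigma^2 , matching the analytic fisher information for the mean of a gaussian .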
the measure ( [ measure ] ) is independent of the experimental data and is constructed under the assumption that the entire sample space can be measured by the observer . however , in real experiments , instrumental and design limitations only allow observation of some subset of the sample space . thus an observation either results in no detected outcome , or in a measurement . the effective predicted distribution of measured outcomes is therefore not the , but rather where . we will argue that if the models in the asymptotic regions of a noncompact parameter space differ in their predictions mostly outside the observable region , the fisher information for the effective distributions ( [ effdist ] ) can decay sufficiently quickly to render the volume finite . in this section we will give one set of sufficient conditions for this to happen , and in sec . 3 we will give a detailed example . consider a model , specified by parameters , and a distribution , with . we will slightly simplify notation by simply referring to the distribution as and understanding the implicit parameter dependence . let us use spherical coordinates in the parameter space with being the radial coordinate , i.e. .
also consider an experimental set - up that can only make measurements inside some compact region . thus , the probability of no measurement being registered by this experiment is . our first assumption is a smoothness condition , so that inside the region the distribution does not fluctuate too much as one approaches the asymptotics of the parameter space : where goes to zero as goes to infinity ; we will later specify the exact scaling needed . intuitively , this condition says that as the parameter , the models do not differ too much inside the observable part of the sample space . this allows us to estimate where vol denotes the volume of the compact region . secondly , we assume that inside , the distributions do not decay too quickly as . intuitively , since any experiment will only measure a finite amount of data ( say points ) , if the probability of a single measurement lying inside is significantly less than , then the experimental set - up will not detect anything . thus we will require where again we will later specify the scaling of with . ( if decays too quickly as , then the models in the asymptotic region of the parameter space make no measurable predictions for experiments designed with a finite number of measurements ; the example in sec . 3 will illustrate such a scenario . )
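to make the assumption concrete , here is a toy illustration in which every number is hypothetical : the sample space is the real line , the observable window is $w = [-1, 1]$ , and the model family is a unit - width gaussian whose mean plays the role of the radial parameter . the probability of registering any measurement decays rapidly as the parameter runs off to infinity , which is the mechanism the text describes .

```python
import math

def phi(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_observed(r, window=(-1.0, 1.0)):
    """Probability that a draw from N(r, 1) lands in the observable
    window w -- the complement of the no-detection probability in the
    text, for this toy model."""
    lo, hi = window
    return phi(hi - r) - phi(lo - r)
```

for r = 0 about 68 % of draws are observable , while for r = 10 essentially none are : models deep in the asymptotic region of the parameter space predict nothing an experiment confined to w could see .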
using these assumptions , we can establish an upper bound for the fisher information ( [ eq : fishertheor ] ) : thus the determinant of the fisher information scales as and for the integral to be finite one must have suppression stronger than . thus the integral converges if is suppressed more strongly than . from the experimental set - up one can estimate how scales with , which then determines how needs to scale for the integral to converge . this is thus a sufficient condition for rendering the parameter space effectively finite . it is worth stressing that , following the above analysis , any method of deciding the validity of a model is impacted by the choice of the experiment in a completely computable way , and this should be taken into account when designing experiments . [ sec : example ] consider a star orbited by a planet so that the planet periodically passes between the star and the earth . the light output ( light - curve ) of such a star is a constant line , with a small periodic dip when the planet is eclipsing part of the star . one model for such a light - curve was proposed in as where an example light - curve is shown in figure [ fig : curve ] ; is the period of the planet ; is the duration of the transit , i.e. how long the planet eclipses the star ; is the depth of the dip in the curve ; is the total observed brightness of the star ; and is a phase parameter specifying when the planet 's transit occurs . finally , is a constant parameter specifying the sharpness of the edges of the light - curve , expected to be fairly large as the transition between transit / no - transit is relatively quick . the assumption greatly simplifies our analysis , and is not physically very restrictive . the parameter space for this model is clearly non - compact as can range to infinity . however , we will argue that the space is effectively rendered compact after the experimental set - up is specified .
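the exact functional form of the light - curve is elided above ; the sketch below reproduces only its qualitative shape ( constant brightness , a periodic dip of given depth and duration , sharp tanh - like edges ) and should not be read as the paper's precise parametrization . all names and the default sharpness are ours .

```python
import math

def light_curve(t, period, tau, eta, depth, b, c=50.0):
    """Hedged sketch of a periodic transit light-curve: constant
    brightness b with a dip of relative depth `depth` lasting `eta`,
    starting at phase `tau` each period; tanh edges of sharpness c
    give the quick transit / no-transit transition."""
    phase = (t - tau) % period
    # signed distance from the dip center, folded into one period
    s = phase - period * round(phase / period)
    dip = 0.5 * (math.tanh(c * (s + eta / 2)) - math.tanh(c * (s - eta / 2)))
    return b * (1.0 - depth * dip)
```

in transit the flux drops to roughly b ( 1 - depth ) , away from transit it returns to b , and the curve repeats with the period , matching the description of figure [ fig : curve ] .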
to be precise , the parameter space is ( is taken to be a constant , not a parameter ) : , \quad \tau \in [ 0,t ] , \quad \eta \in [ 0,\delta t ] , \quad b \in [ 0,b_{max } ] , where is a small number that we will estimate , and the maximal brightness is naturally given by the brightness of sirius , the brightest star visible from earth . assuming a circular orbit as in figure [ fig : exo ] , the ratio of the transit time to the period of the planet is given by for the currently known transiting exo - planets this ratio is around , although for a typical system one expects it to be smaller , as large planets orbiting close to the star are easier to observe , which favors large values of the ratio . for an elliptical orbit , the answer will differ by an factor , but will have the same dependence on . thus , will always be a small fraction of ( with an average distance ) . now we can write down the probability density for measuring values for the light - curve at times , with the light - curve specified by parameters , as where we have assumed that the uncertainty in each measurement is gaussian , and further we have chosen the standard deviation to be equal for all measurements for simplicity . using ( [ gauss ] ) in the formula ( [ eq : fishertheor ] ) , we see that the integrals in the fisher information are gaussian in ; thus we can compute them analytically to get . this is our key formula , and we shall spend the next subsection analysing its properties . [ finiteness ] we now wish to apply the general arguments of section [ general - argument ] to the exo - planet system . consider an experimental set - up that can barely measure two periods , and then consider shortening the experiment slightly so that only one dip is detected ; this is depicted in figure [ fig : curve ] .
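for gaussian measurement noise of equal width , the fisher information reduces to a sum over the measurement times of products of model derivatives , $j_{ij} = \sigma^{-2} \sum_k \partial_i f(t_k)\,\partial_j f(t_k)$ . a generic numerical sketch ( function names ours , derivatives by central finite differences ) :

```python
import numpy as np

def fisher_matrix(model, theta, times, sigma, h=1e-6):
    """Fisher information for y_k = F(t_k; theta) + N(0, sigma^2):
    J_ij = (1/sigma^2) * sum_k dF/dtheta_i(t_k) * dF/dtheta_j(t_k),
    with the derivatives taken by central finite differences."""
    theta = np.asarray(theta, dtype=float)
    grads = []
    for i in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[i] += h
        tm[i] -= h
        grads.append([(model(t, tp) - model(t, tm)) / (2 * h) for t in times])
    G = np.asarray(grads)            # shape (n_params, n_times)
    return (G @ G.T) / sigma**2
```

for a linear model f ( t ) = a + b t with unit noise and measurement times 0 , 1 , 2 this reproduces the textbook result [ [ n , sum t ] , [ sum t , sum t^2 ] ] ; the same routine could be pointed at a transit light - curve model to explore which measurement times dominate the determinant .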
to be precise , the shorter set - up measures the beginning and end of a transit at and , points in between , and points after the transit . the longer set - up makes measurements at the same times , and additionally at times and , detecting the second transit . in the next subsections we will show that , indicating that detecting the second dip is of fundamental importance to experimental design ; without the second dip the experimental set - up can not differentiate models with large enough . this renders the parameter space effectively finite , as an experiment can not differentiate between models that have a period larger than the duration of the experiment . [ sec : estimate ] in this subsection we will give an estimate for the magnitude of the determinant of the fisher information , and show how it is affected by the inclusion of the second transit in the data . in subsequent subsections we will exactly compute the determinant for a few specific experimental set - ups . from ( [ eq : fisher ] ) and the definition of a determinant , we see that in each term of the determinant each parameter appears exactly twice in the derivatives , i.e. each term is of the form as a rough estimate of the size of the determinant , we investigate how large terms of this type can be . the derivatives are from ( [ eq : f ] ) we see that only when , again assuming large . this tells us that the measurements that contribute most to the fisher information are the ones on the edges of the dips ( for our current purposes this is sufficiently accurate ) , i.e.
at times and in figure [ fig : curve ] . we write the condition as and note that the ratio of transit time to period is very small , . this gives us the solutions where is an integer indexing the number of the dip , with denoting the solitary dip if only one is present in the data . we wish to estimate the ratio of the determinants of the fisher information by an order of magnitude estimate where both the numerator and the denominator are of the form ( [ generic ] ) , and according to the argument above the maximal contributions come from the edge measurements . from ( [ dd]-[deta ] ) we see that the derivatives with respect to and are all periodic at the edges : for , and thus will cancel in the ratio ( [ ratio ] ) . it is crucial , however , that is not periodic due to the second term in ( [ dt ] ) . at the first dip , , we expand ( [ dt ] ) to find while at the second dip , , the contribution is , ignoring signs that are irrelevant for this estimate . thus we see that the fisher information increases strongly as the second dip is included : where we ignored order one coefficients . this is an explicit example of how our arguments from section [ general - argument ] work for a realistic model : when an experimental set - up does not have the capability to detect two dips , it becomes impossible to determine the period , and consequently the fisher information is very small ( or vanishing ) compared to an experiment that is able to detect two dips and determine the period more accurately .
for any given experiment of finite duration , the fisher information will decline with when , effectively rendering the parameter space compact . to verify our claim that the parameter space is really rendered compact , we need to show that det decays strongly enough as is taken to infinity . it is easy enough to find the -scaling of the derivatives ( [ dt]-[deta ] ) ; scales as , while the others stay finite in the large limit . thus , as seen from ( [ generic ] ) , the determinant will scale as , which shows that the parameter space measure vanishes fast enough for large to render the parameter space volume finite . [ sec : dets ] while the order of magnitude estimate of the previous subsection offers an intuitive reason as to why the fisher information decreases sharply when the number of peaks detected falls below two , it is still instructive to explicitly compute the determinant in a few experimental set - ups . detecting two dips : let us first consider the case from section [ finiteness ] , i.e. measurements at the times indicated in figure [ fig : curve ] .
using the derivatives ( [ dt]-[deta ] ) one can write down the fisher information matrix ( [ eq : fisher ] ) as where for brevity we defined . in computing this matrix we used that for , which is true up to corrections of order , as seen from ( [ eq : f ] ) ; for this reason one does not need to specify the exact times of the measurements during the dip , or the measurements outside the dip , as up to corrections they all contribute equally . the determinant of the fisher information is simple , this result explains the subtlety referred to earlier : although measurements at the edges contribute the most to the fisher information , if one only has measurements at the edges ( ) the fisher information actually vanishes . physically this is easy to interpret , as only measuring the edges will yield four points lying on a line , and thus they can not be used to determine any information about the curve ; other data points are needed to ` anchor ' the data . detecting only one dip : similarly one can compute the fisher information in the ` short ' experimental set - up , where measurements are made at the same times as before , except not at and . this yields and perhaps surprisingly the determinant vanishes : det , up to tiny corrections . this indicates that the estimate in section [ sec : estimate ] was an overestimate : the terms in the determinant of are of the magnitude estimated , but the determinant is arranged in such a way that the terms cancel to a high accuracy , and the compactness of the parameter space is strengthened . our analysis has shown how the specification of an experimental design affects the measure on model parameter spaces in mdl model selection ( or equivalently the prior probability distribution on parameters in the bayesian approach ) . interestingly , the finite number of measurements within a bounded sample space in any practical experiment can effectively render a non - compact parameter space compact , thereby leading to a well - defined prior distribution ( [ measure ] ) . our analysis could be turned around to design experiments to discriminate well between models in some chosen region of the parameter space by ensuring that the fisher information ( [ eq : fisher ] ) is large in the desired region . it would also be useful to determine general conditions under which experimental design effectively makes model parameter spaces compact , perhaps following the arguments of sec . 2 . this paper was written in honor of jorma rissanen 's 75th birthday and his many seminal achievements in statistics and information theory . vb and kl were partially supported by the doe under grant de - fg02 - 95er40893 , and kl was also partly supported by a fellowship from the academy of finland . vb was also partly supported as the helen and martin chooljian member at the institute for advanced study . v. balasubramanian , `` statistical inference , occam 's razor and statistical mechanics on the space of probability distributions , '' [ arxiv : cond - mat/9601030 ] . v. balasubramanian , `` a geometric formulation of occam 's razor for inference of parametric distributions , '' [ arxiv : adap - org/9601001 ] . i.j. myung , v. balasubramanian and m.a . pitt , `` counting probability distributions : differential geometry and model selection '' , proceedings of the national academy of sciences , 97(21 ) : 11170 - 11175 , 2000 . | we apply the minimum description length model selection approach to the detection of extra - solar planets , and use this example to show how specification of the experimental design affects the prior distribution on the model parameter space and hence the posterior likelihood which , in turn , determines which model is regarded as most ` correct ' . our analysis shows how conditioning on the experimental design can render a non - compact parameter space effectively compact , so that the mdl model selection problem becomes well - defined . |
a major impediment that prevents pressurized water reactors ( pwrs ) from operating at higher duty and longer cycles is the accumulation of boron within metal oxide scales that deposit on the upper spans of fuel assemblies . boron , in the form of boric acid ( h ) , is added to pwr coolant to control the neutron flux , while lithium hydroxide ( lioh ) is also dosed to control the acidity of the coolant and to reduce corrosion . because of the large neutron absorption cross section of , a small amount of accumulated b is sufficient to cause an abnormal decrease in the neutron flux , which shifts the power output toward the bottom half of the reactor core . this phenomenon , known as axial offset anomaly ( aoa ) , has been observed in high - duty reactors that run long fuel cycles . in extreme form , aoa can decrease the reactor shutdown margin sufficiently to force major power reduction , leading to substantial economic losses . aoa modeling efforts traditionally assume that boron deposition within the metal oxide scales ( which are commonly referred to as crud , an acronym for chalk river unidentified deposits ) occurs predominantly through precipitation of lithium borate compounds such as libo , li , li . although the retrograde solubility of these borates could explain the `` lithium return '' experienced during plant shutdown , they have never been observed experimentally in pwr crud . using mössbauer spectroscopy together with xrd on crud scrapes recovered from high - duty pwrs , sawicki identified the precipitation of ni as a possible mechanism for b deposition . mesoscale crud models developed by short _ et al . _ assume that supersaturation of boric acid leads to precipitation of boron trioxide ( b ) within the crud . in recent work , we combined _ ab initio _ calculations with experimental formation enthalpies to investigate the incorporation of b into the structure of nickel ferrite ( nife , nfo ) as a potential new mechanism for b deposition within crud .
assuming solid - solid ( and solid - gas ) equilibrium between nickel ferrite and elemental reservoirs of fe , ni , and b ( and o gas ) , we found that it is thermodynamically favorable for b to form secondary phases with fe , ni , and o instead of entering the nfo structure as a point defect . building on previous works , the present study addresses the same question ; here , however , the defect formation energies are evaluated assuming solid - liquid equilibrium between nfo and the surrounding aqueous solution of ni , fe , and dissolved boric acid ( H3BO3 ) . to set up solid - liquid equilibrium , the chemical potentials of individual aqueous species are defined as functions of temperature , pressure , and concentration and are linked to the chemical potentials of the ionic species in the solid . this new scheme allows for the evaluation of defect formation energies under conditions that are specific to operating pwrs . the approach is quite general and applicable to a large variety of solids in equilibrium with aqueous solutions . the first - principles calculations required for the present study have been carried out within density functional theory ( dft ) using the same computational parameters and crystal models that are specified in ref . . the formation of a defect in a crystalline solid can be regarded as an exchange of particles ( atoms and electrons ) between the host material and chemical reservoirs . the formation energy of a defect _ d _ in charge state _ q _ can be written as in eq . , where and are the total energies of the defect - containing and defect - free solids , calculated within dft . the third term on the right side of eq .
represents the change in energy due to the exchange of atoms between the host compound and the chemical reservoir , where is the atomic chemical potential of the constituent _ i _ ( _ i _ = ni , fe , or b ) .the quantity represents the number of atoms added to or removed from the supercell .the quantity is the fermi energy referenced to the energy of the valence band maximum ( vbm ) of the defective supercell , .this value is calculated as the vbm energy of the pure nfo , corrected by aligning the core potential of atoms far away from the defect in the defect - containing supercell with that in the defect free supercell .the quantity represents the charge state of the defect , _i. e. _ the number of electrons exchanged with the electron reservoir with chemical potential . under solid - liquid equilibrium conditions , the chemical potentials of the ionic species in the solid, , are equal to the chemical potential of the aqueous species in the saturated solution , .to derive an expression for defect formation energy , eq .has to be written in terms of ionic chemical potentials , , instead of atomic chemical potentials , .this can be accomplished by adding and subtracting the term from eq . .this term can be interpreted as the energy necessary to exchange electrons between the electron reservoir and the atomic species in the pure nfo . if we combine this energy with the atomic chemical potential ( third term in eq . )we obtain the ionic chemical potential of species in nfo : in eq . represents the ionic charge , _ i. e_. the number of electrons exchanged with the electron reservoir to create the ionic species in nfo , and is the energy of the vbm in the pure nfo to which the energy of the electron reservoir is referenced . 
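the symbols of the equations referred to above were lost in extraction ; the following is a hedged reconstruction of the two expressions , consistent with the standard zhang - northrup defect formalism cited in the bibliography ( sign conventions vary between papers , so treat this as a sketch rather than the paper's exact notation ) :

```latex
% Hedged reconstruction of the defect-formation-energy expressions described
% in the text (Zhang--Northrup form); sign conventions vary between papers.
% n_i > 0 for atoms added to the supercell, n_i < 0 for atoms removed.
\begin{align}
  E_f(D^{q}) &= E_{\mathrm{tot}}(D^{q}) - E_{\mathrm{tot}}(\mathrm{host})
                - \sum_i n_i \mu_i
                + q \left( E_F + \varepsilon_{\mathrm{VBM}} \right) , \\
% Ionic chemical potential of species i in the solid, obtained by absorbing
% the electron-exchange energy Z_i (E_F + eps_VBM) into the atomic potential:
  \tilde{\mu}_i &= \mu_i + Z_i \left( E_F + \varepsilon_{\mathrm{VBM}} \right) .
\end{align}
```

under solid - liquid equilibrium , setting equal to the chemical potential of the corresponding aqueous species closes the system of unknowns , which is the step the text carries out next .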
using eq . together with the solid - liquid equilibrium condition , , the defect formation energy becomes : where is the energy difference between the defect - containing and defect - free supercells . therefore , to calculate the defect formation energy , the chemical potentials of the aqueous species have to be evaluated . the scheme described above has the advantage that it decouples the ionic charge from the charge state of the defect ; charge neutrality is achieved through exchange of electrons with the electron reservoir , whose energy equals the fermi level . the chemical potential of the aqueous ions , , can be written as the sum of the standard chemical potential , , and a temperature - dependent term ; in eq . , is the universal gas constant and is the activity of the ionic species in the aqueous solution . in the present case , because nfo is weakly soluble in water , the activity of the ionic species can be approximated by the concentration of ions in the solution . the gibbs energies of formation required for eq . are obtained from the supcrt database , which uses the revised helgeson - kirkham - flowers equation of state to predict the thermodynamic behavior of aqueous species at high temperature and pressure . the chemical potentials of the solid phases are usually approximated by the total energy per atom of the elemental solid calculated within the dft framework . however , as pointed out in earlier work , this approach suffers from incomplete error cancellation when total dft energies of physically and chemically dissimilar systems are compared . therefore , to compute the elemental - phase chemical potentials of fe , ni , b , and o , we extend the database of 50 elemental energies published by stevanovic _ et al . _ to include b. to do this we add 26 b - containing binaries to the large fitting set of 252 compounds that have been used by stevanovic _ et al .
_ and solve the overdetermined system of 278 equations for 51 elements using a least - squares approach , as described in refs . . the calculated dft energies and experimental formation enthalpies of the 26 b - containing binaries are listed in table [ table1 ] , while the 51 elemental - phase chemical potentials are given in table [ table2 ] in the appendix . [ table1 ] dft energies and experimental enthalpies of formation of the 26 b - containing binaries that have been added to the fitting set in ref . to calculate the elemental - phase chemical potentials . theoretical enthalpies of formation are also listed . _ pwr axial offset anomaly ( aoa ) guidelines , revision 1 _ , epri , palo alto , ca , 1008102 ( 2004 ) . _ root - cause investigation of axial power offset anomaly _ , epri , palo alto , ca , tr-108320 ( 1997 ) . _ modeling pwr fuel corrosion product deposition and growth processes _ , epri , palo alto , ca , 1009734 ( 2004 ) . _ modeling pwr fuel corrosion product deposition and growth process : final report _ , epri , palo alto , ca , 1011743 ( 2005 ) . _ axial offset anomaly ( aoa ) mechanism verification in simulated pwr environments _ , epri , palo alto , ca , 1013423 ( 2006 ) . _ pressurized water reactor ( pwr ) axial offset anomaly mechanism verification in simulated pwr environments _ , epri , palo alto , ca , 1021038 ( 2010 ) . s. uchida , y. asakura , and h. suzuki , nucl . eng . des . * 241 * , 2398 ( 2011 ) . j. a. sawicki , j. nucl . mater . * 374 * , 248 ( 2008 ) . j. a. sawicki , j. nucl . mater . * 402 * , 124 ( 2010 ) . m. p. short , d. hussey , b. k. kendrick , t. m. besmann , c. r. stanek , and s. yip , j. nucl . mater . * 443 * , 579 ( 2013 ) . zs . rák , c. j. o'brien , and d. w. brenner , j. nucl . mater . * 452 * , 446 ( 2014 ) . c. j. o'brien , zs . rák , and d. w. brenner , j. phys . : condens . matter * 25 * , 445008 ( 2013 ) . c. j. o'brien , zs . rák , and d. w. brenner , j. phys . chem . c * 118 * , 5414 ( 2014 ) . k. matsunaga , phys . rev . b
* 77 * , 104106 ( 2008 ) . s. b. zhang and j. e. northrup , phys . rev . lett . * 67 * , 2339 ( 1991 ) . s. b. zhang , j. phys . : condens . matter * 14 * , r881 ( 2002 ) . t. windman , t. windman , and e. shock , geochim . cosmochim . acta * 72 * , a1027 ( 2008 ) . j. w. johnson , e. h. oelkers , and h. c. helgeson , comput . geosci . * 18 * , 899 ( 1992 ) . j. c. tanger and h. c. helgeson , am . j. sci . * 288 * , 19 ( 1988 ) . r. o. jones and o. gunnarsson , rev . mod . phys . * 61 * , 689 ( 1989 ) . v. stevanovic , s. lany , x. w. zhang , and a. zunger , phys . rev . b * 85 * , 115104 ( 2012 ) . s. lany , phys . rev . b * 78 * , 245207 ( 2008 ) . a. jain , g. hautier , s. p. ong , c. j. moore , c. c. fischer , k. a. persson , and g. ceder , phys . rev . b * 84 * , 045115 ( 2011 ) . m. todorova and j. neugebauer , phys . rev . applied * 1 * , 014001 ( 2014 ) . k. fujiwara and m. domae , in proceedings of the 14th international conference on the properties of water and steam , edited by m. nakahara et al . ( maruzen co. , ltd . , kyoto , 2004 ) , p. 581 . a. v. bandura and s. n. lvov , j. phys . chem . ref . data * 35 * , 15 ( 2006 ) . a. e. paladino , j. am . ceram . soc . * 42 * , 168 ( 1959 ) . h. m. o'bryan , f. r. monforte , and r. blair , j. am . ceram . soc . * 48 * , 577 ( 1965 ) . a. t. nelson , j. t. white , d. a. andersson , j. a. aguiar , k. j. mcclellan , d. d. byler , m. p. short , and c. r. stanek , j. am . ceram . soc . * 97 * , 1559 ( 2014 ) . | a serious concern in the safety and economy of a pressurized water nuclear reactor is related to the accumulation of boron inside the metal oxide ( mostly NiFe2O4 spinel ) deposits on the upper regions of the fuel rods . boron , being a potent neutron absorber , can alter the neutron flux , causing anomalous shifts and fluctuations in the power output of the reactor core . this phenomenon reduces the operational flexibility of the plant and may force the down - rating of the reactor .
in this work an innovative approach is used to combine first - principles calculations with thermodynamic data to evaluate the possibility of b incorporation into the crystal structure of NiFe2O4 under conditions typical of operating pressurized water nuclear reactors . analyses of the temperature and ph dependence of the defect formation energies indicate that b can accumulate in NiFe2O4 as an interstitial impurity and may therefore be a major contributor to the anomalous axial power shift observed in nuclear reactors . this computational approach is quite general and applicable to a large variety of solids in equilibrium with aqueous solutions . |
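the least - squares determination of elemental - phase chemical potentials described in the study above can be sketched numerically . this is a toy , FERE - style fit : given dft total energies per formula unit and experimental formation enthalpies for a set of compounds , one solves the overdetermined system for the elemental reference energies . all stoichiometries and energies below are invented for illustration and are not the paper's fitting data .

```python
import numpy as np

# Toy version of the fitted-elemental-reference-energies scheme described
# above: solve  E_dft(compound) - sum_i n_i * mu_i = dH_exp(compound)
# for the elemental reference energies mu_i in a least-squares sense.
# All numbers below are invented for illustration only.

elements = ["Fe", "Ni", "B", "O"]

# Each row: (stoichiometry vector n_i, E_dft per formula unit, dH_exp), in eV.
compounds = [
    ([0, 0, 2, 3], -21.50, -12.80),  # "B2O3"-like entry
    ([3, 0, 0, 4], -32.10, -11.60),  # "Fe3O4"-like entry
    ([0, 1, 0, 1], -10.40, -2.50),   # "NiO"-like entry
    ([1, 1, 0, 2], -18.30, -6.10),   # mixed-oxide entry
    ([0, 3, 1, 0], -20.90, -1.20),   # "Ni3B"-like entry
    ([2, 0, 1, 0], -15.70, -0.90),   # Fe-B binary entry
]

A = np.array([n for n, _, _ in compounds], dtype=float)
b = np.array([e_dft - dh for _, e_dft, dh in compounds])

# Least-squares solution of A @ mu = b (6 equations, 4 unknowns);
# the real fit in the paper has 278 equations and 51 unknowns.
mu, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)

for el, m in zip(elements, mu):
    print(f"mu({el}) = {m:.3f} eV/atom")
```

the same `np.linalg.lstsq` call scales directly to the 278-equation , 51-element system ; only the stoichiometry matrix and energy vector grow .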
networks provide a powerful representation of interaction patterns in complex systems . the structure of social relations among individuals, interactions between proteins , food webs , and many other situations can be represented using networks . until recently , the vast majority of studies focused on networks that consist of a single type of entity , with different entities connected to each other via a single type of connection .such networks are now called _ single - layer _ ( or _ monolayer _ ) networks .the idea of incorporating additional information such as multiple types of interactions , subsystems , and time - dependence has long been pointed out in various fields , such as sociology , anthropology , and engineering , but an effective unified framework for the mathematical treatment of such multidimensional structures , which are usually called _ multilayer networks _ , was developed only recently .multilayer networks can be used to model many complex systems .for example , relationships between humans include different types of interactions such as relationships between family members , friends , and coworkers that constitute different _ layers _ of a social system .different layers of connectivity also arise naturally in natural and human - made systems in transportation , ecology , neuroscience , and numerous other areas .the potential of multilayer networks for representing complex systems more accurately than was previously possible has led to an explosion of work on the physics of multilayer networks .a key question concerns the implications of multilayer structures on the dynamics of complex systems , and several papers about interdependent networks a special type of multilayer network revealed that such structures can change the qualitative behaviors in a significant way .for example , several studies have provided insights on percolation properties and catastrophic cascades of failures in multilayer networks .these findings helped highlight an 
important challenge : how does one account for multiple layers of connectivity in a consistent mathematical way ? an explosion of recent papers has developed the field of multilayer networks into its modern form , and there is now a suitable mathematical framework , novel structural descriptors , and tools from fields ( such as statistical physics ) for studying these systems . many studies have also started to highlight the importance of analyzing multilayer networks , instead of relying on their monolayer counterparts , to gain new insights about empirical systems ( see , e.g. , ) . it has now been recognized that the study of multilayer networks is fundamental for enhancing understanding of dynamical processes on networked systems . important examples are spreading processes , such as flows ( and congestion ) in transportation networks , and information and disease spreading in social networks . for instance , when two spreading processes are coupled in a multilayer network , the onset of one disease - spreading process can depend on the onset of the other one , and in some scenarios there is a curve of critical points in the phase diagram of the parameters that govern a system s spreading dynamics . such a curve reveals the existence of two distinct regimes , such that the criticality of the two dynamics is interdependent in one regime but not in the other . similarly , cooperative behavior can be enhanced by multilayer structures , providing a novel way for cooperation to survive in structured populations . for additional examples , see various reviews and surveys on multilayer networks and specific topics within them . a multilayer framework allows a natural representation of coupled structures and coupled dynamical processes .
in this article , after we give a brief overview on representing multilayer networks , we will focus on spreading processes in which multilayer analysis has revealed new physical behavior .specifically , we will discuss two cases : ( i ) a single dynamical process , such as continuous or discrete diffusion , running on top of a multilayer network ; and ( ii ) different dynamical processes , in which each one runs on top of a given layer , but they are coupled by a multilayer structure .one can represent a monolayer network mathematically by using an adjacency matrix , which encodes information about ( possibly directed and/or weighted ) relationships among the entities in a network . because multilayer networks include multiple dimensions of connectivity , called _ aspects _ , that have to be considered simultaneously, their structure is much richer than that of ordinary networks .possible aspects include different types of interactions or communication channels , different subsystems , different spatial locations , different points in time , and more .one can use tensors to encode the connectivity of multilayer networks as ( multi)linear - algebraic objects .multilayer networks include three types of edges : intra - layer edges ( connecting nodes within the same layer ) , inter - layer edges between replica nodes ( i.e. , copies of the same entity ) in different layers , and inter - layer edges between nodes that represent distinct entities .distinguishing disparate types of edges has deep consequences both mathematically and physically .mathematically , this yields banded structures in multilinear - algebraic objects that depend on a systems physical constraints , and such structures impact features such as a network s spectral properties .these , in turn , have a significant impact on dynamical systems ( e.g. 
, of spreading processes or coupled oscillators ) that are coupled through multilayer networks .moreover , intra - layer edges and inter - layer edges encode relationships in fundamentally different ways , and they thereby represent different types of physical functionality . for example , in a metropolitan transportation system , intra - layer edges account for connections between the same type of node ( e.g. , between two different subway stations ) , whereas inter - layer edges connect different types of nodes ( e.g. , between a certain subway station and an associated bus station ) . in some cases , inter - layer edges and intra - layer edges may even be measured using different physical units . for instance, an intra - layer edge in a multilayer social network could represent a friendship between two individuals on facebook , whereas an inter - layer edge in the same network could represent the transition probability of an individual switching from using facebook to use twitter .the rich variety of connections in a typical multilayer network can be mathematically represented by the components of a 4th - order tensor , called multilayer adjacency tensor , encoding the relationship between any node in layer and any node in layer in the system ( where and , denotes the number of nodes in the network and denotes the number of layers ) .once the connectivity of the nodes and layers are encoded in a tensor , one can define novel measures to characterize the multilayer structure .however , this is a delicate process , as naively generalizing existing concepts from monolayer networks can lead to qualitatively incorrect or nonsensical results .an alternative way of generalizing concepts from monolayer networks to multilayer networks is to use sets of adjacency matrices rather than tensors .this alternative approach has the advantage of familiarity , and indeed it is also convenient to `` flatten '' adjacency tensors into matrices ( called `` supra - adjacency matrices 
'' ) for computations .however , the compact representation of multilayer networks in terms of tensors allows greater abstraction , which has been very insightful , and it will facilitate further development of the mathematics of complex systems .studies of structural properties of multilayer networks include descriptors to identify the most `` central '' nodes according to various notions of importance and quantify triadic relations such as clustering and transitivity .significant advances have been achieved to reduce the structural complexity of multilayer networks , to unveil mesoscale structures ( e.g. , communities of densely - connected nodes ) , and to quantify intra - layer and inter - layer correlations in empirical networked systems .the structural properties of multilayer networks depend crucially on how layers are coupled together to form a multilayer structure .inter - layer edges provide the coupling and help encode structural and dynamical features of a system , and their presence ( or absence ) produces fascinating structural and dynamical effects . for example , in multimodal transportation systems , in which layers represent different transportation modes , the weight of inter - layer connections might encode an economic or temporal cost to switching between two modes . in multilayer social networks ,inter - layer connections allow models to tune , in a natural way , an individual s self - reinforcement in opinion dynamics .depending on the relative importances of intra - layer and inter - layer connections , a multilayer network can act either as a system of independent entities , in which layers are structurally decoupled , or as a single - layer system , in which layers are indiscernible in practice . 
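the `` flattening '' of a multiplex into a supra - adjacency matrix described above can be sketched in a few lines : intra - layer adjacency matrices sit in the diagonal blocks , and inter - layer edges between replica nodes appear as a weighted identity in the off - diagonal blocks . the two small example layers below are invented for illustration ; the diagonal form of the inter - layer coupling assumes a multiplex ( replica - to - replica edges only ) .

```python
import numpy as np

def supra_adjacency(A1, A2, p):
    """Flatten a two-layer multiplex (same node set in both layers)
    into a 2N x 2N supra-adjacency matrix with inter-layer weight p."""
    n = A1.shape[0]
    coupling = p * np.eye(n)  # replica nodes connected across layers
    return np.block([[A1, coupling],
                     [coupling, A2]])

# Layer 1: a 4-node path; layer 2: a 4-node ring (illustrative layers).
A1 = np.array([[0, 1, 0, 0],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [0, 0, 1, 0]], dtype=float)
A2 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]], dtype=float)

S = supra_adjacency(A1, A2, p=0.5)
print(S.shape)  # (8, 8)
```

note how the layer labels survive as block indices , which is what makes the flattening lossless .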
in some multilayer networks , one can even derive a sharp transition between these two regimes . there are two different categories of dynamical processes on multilayer networks : ( i ) a single dynamical process on top of the coupled structure of a multilayer network ( see fig . [ fig : dynamics_types]a ) ; and ( ii ) `` mixed '' or `` coupled '' dynamics , in which two or more dynamical processes are defined on each layer separately and are coupled together by the presence of inter - layer connections between nodes ( see fig . [ fig : dynamics_types]b ) . * _ single dynamics . _ * in this section , we analyze physical phenomena that arise from a single dynamical process on top of a multilayer structure . the behavior of such a process depends both on intra - layer structure ( i.e. , the usual considerations in networks ) and on inter - layer structure ( i.e. , the presence and strength of interactions between nodes on different layers ) . one of the simplest types of dynamics is a diffusion process ( either continuous or discrete ) . the physics of diffusion , which has been analyzed thoroughly in multiplex networks , reveals an intriguing and unexpected phenomenon : diffusion can be faster in a multiplex network than in any of the layers considered independently . one can understand diffusion in multiplex networks in terms of the spectral properties of a laplacian tensor ( in particular , we consider the type of laplacian that is known in graph theory as the `` combinatorial laplacian '' ) , obtained from the adjacency tensor of a multilayer network , that governs the diffusive dynamics . one first `` flattens '' the laplacian tensor ( without loss of information , provided one keeps the layer labels ) into a special lower - order tensor called the `` supra - laplacian matrix '' . the supra - laplacian matrix has a block structure , in which the diagonal blocks encode the laplacian matrices corresponding to each layer separately and the off - diagonal blocks encode inter - layer connections . the supra - laplacian matrix was initially presented in the literature as a matrix for a multilayer network that includes both intra - layer edges and inter - layer edges . the time scale of diffusion is controlled by the smallest positive eigenvalue of the supra - laplacian matrix . in fig . [ fig : diffusion ] , we show a representative result that conveys the existence of two distinct regimes in multiplex networks as a function of the inter - layer coupling strength . the regimes illustrate how multilayer structure can influence the outcome of a physical process . for small values of the inter - layer coupling , the multilayer structure slows down the diffusion ; for large values , the diffusion speed converges to the mean diffusion speed of the superposition of layers . in many cases , the diffusion in the superposition is faster than that in any of the separate layers . these findings are a direct consequence of the emergence of more paths between every pair of nodes due to the multilayer structure . the transition between the two regimes is a structural transition , a characteristic of multilayer networks that can also arise in other contexts . the above phenomenology can also occur in discrete processes . perhaps the most canonical examples of discrete dynamics are random walks , which are used to model markovian dynamics on monolayer networks and which have yielded numerous insights over the last several decades . in a random walk , a discretized form of diffusion , a walker jumps between nodes through available connections .
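the two diffusion regimes described above can be reproduced numerically from the spectrum of the supra - laplacian . the sketch below builds the supra - laplacian of a two - layer multiplex with inter - layer coupling weight p and returns its smallest positive eigenvalue ; the example layers ( a 10-node ring and the same ring with two chords ) are invented for illustration . for small p the eigenvalue equals 2p exactly ( the anti - symmetric inter - layer mode ) , and it grows monotonically toward the value set by the superposition of layers .

```python
import numpy as np

def laplacian(A):
    """Combinatorial (graph) Laplacian of an undirected layer."""
    return np.diag(A.sum(axis=1)) - A

def ring(n):
    """Cycle graph on n nodes."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
    return A

def lambda2_supra(A1, A2, p):
    """Smallest positive eigenvalue of the supra-Laplacian of a
    two-layer multiplex with inter-layer coupling weight p."""
    n = A1.shape[0]
    I = np.eye(n)
    L = np.block([[laplacian(A1) + p * I, -p * I],
                  [-p * I, laplacian(A2) + p * I]])
    return np.linalg.eigvalsh(L)[1]  # eigenvalues are returned in ascending order

# Layer 1: a 10-node ring; layer 2: the same ring plus two chords.
A1 = ring(10)
A2 = ring(10)
A2[0, 5] = A2[5, 0] = 1.0
A2[2, 7] = A2[7, 2] = 1.0

for p in (0.01, 0.1, 1.0, 10.0):
    print(f"p = {p:5.2f}  lambda_2 = {lambda2_supra(A1, A2, p):.4f}")
```

since the vector that is constant and positive on one layer and constant and negative on the other is always an eigenvector with eigenvalue 2p , the diffusion time scale is bounded by the inter - layer coupling at small p , which is the slow regime in fig . [ fig : diffusion ] .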
in a multilayer network ,the available connections include layer switching via an inter - layer edge , a transition that has no counterpart in monolayer networks and which enriches random - walk dynamics .an important physical insight of the interplay between multilayer structure and the dynamics of random walkers is `` navigability'' , which we take to be the mean fraction of nodes that are visited by a random walker in a finite time , which ( similar to the case of continuous diffusion ) can be larger than the navigability of an aggregated network of layers . in terms of navigability ,multilayer networks are more resilient to uniformly random failures than their individual layers , and such resilience arises directly from the interplay between the multilayer structure and the dynamical process .another physical phenomenon that arises in multilayer networks is related to congestion , which arises from a balance between flow over network structures and the capacity of such structures to support flow .congestion in networks was analyzed many years ago in the physics literature , but it has been studied only recently in multilayer networks , which can be used to model multimodal transportation systems .it is now known that the multilayer structure of a multiplex network can induce congestion even when a system would remain decongested in each layer independently .coupled dynamical processes are a second archetypical family of dynamics in which multilayer structure plays a crucial role . thus far , the most thoroughly studied examples are coupled spreading processes , which are crucial for understanding phenomena such as the spreading dynamics of two concurrent diseases in two - layer multiplex networks and spread of disease coupled with the spread of information or behavior .we illustrate two basic effects : ( i ) two spreading processes can enhance each other ( e.g. 
, one disease facilitates infection by the other ) , and ( ii ) one process can inhibit the spread of the other ( e.g. , a disease can inhibit infection by another disease or the spreading of awareness about a disease can inhibit the spread of the disease ) .interacting spreading processes also exhibit other fascinating dynamics , and multilayer networks provide a natural means to explore them .the above phenomenology is characterized by the existence of a curve of critical points that separate endemic and non - endemic phases of a disease .this curve exhibits a crossover between two different regimes : ( i ) a regime in which the critical properties of one spreading process are independent of the other , and ( ii ) a regime in which the critical properties of one spreading process do depend on those of the other .the point at which this crossover occurs is called a `` metacritical '' point . in fig .[ fig : spreading ] , we show ( left ) a phase diagram of disease incidence in one layer of two reciprocally enhanced disease spreading processes ; and ( right ) a phase diagram of the incidence in one layer of an inhibitory disease spreading process affecting another disease .the metacritical point delineates the transition between independence ( dashed line ) and dependence ( solid curve ) of the critical properties of the two processes .in most natural and engineered systems , entities interact with each other in complicated patterns that include multiple types of relationships and/or multiple subsystems , change in time , and incorporate other complications .the theory of multilayer networks seeks to take such features into account to improve our understanding of such complex systems . 
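a minimal , homogeneous mean - field sketch of the inhibitory scenario described above ( awareness in layer 1 suppressing a disease in layer 2 ) can make the coupling concrete . the multiplicative coupling form , the parameter names , and all numerical values below are illustrative assumptions , not the model of any specific paper , and the homogeneous approximation discards the network structure that produces the metacritical point itself .

```python
# Homogeneous mean-field sketch of two coupled SIS processes on a two-layer
# multiplex: process 1 ("awareness") spreads in layer 1 and reduces the
# effective infectivity of process 2 ("disease") in layer 2 by the factor
# (1 - gamma * rho1). Euler integration of the mean-field rate equations.

def steady_state(beta1, beta2, mu1, mu2, k1, k2, gamma, steps=20000, dt=0.01):
    rho1, rho2 = 0.01, 0.01  # initial prevalences of the two processes
    for _ in range(steps):
        beta2_eff = beta2 * (1.0 - gamma * rho1)  # inhibition via layer 1
        d1 = beta1 * k1 * rho1 * (1 - rho1) - mu1 * rho1
        d2 = beta2_eff * k2 * rho2 * (1 - rho2) - mu2 * rho2
        rho1 += dt * d1
        rho2 += dt * d2
    return rho1, rho2

# Disease alone (gamma = 0) vs. disease inhibited by awareness (gamma = 0.5).
r1a, r2a = steady_state(0.3, 0.2, 0.5, 0.5, 6, 6, gamma=0.0)
r1b, r2b = steady_state(0.3, 0.2, 0.5, 0.5, 6, 6, gamma=0.5)
print(f"disease prevalence without coupling: {r2a:.3f}, with inhibition: {r2b:.3f}")
```

even in this crude approximation , raising gamma pushes the effective reproduction number of the disease toward its threshold , which is the mean - field shadow of the inhibitory phase boundary in fig . [ fig : spreading ] .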
in the last few years, there have been intense efforts to generalize traditional network theory by developing and validating a framework to study multilayer systems in a comprehensive fashion .the implications of multilayer network structure and dynamics are now being explored in fields as diverse as neuroscience , transportation , ecology , granular materials , evolutionary game theory , and many others .for instance , in ecological networks , different layers might encode different types of interaction e.g. trophic and non - trophic or different spatial patches ( or different temporal snapshots ) , where the same interaction may or may not appear . in human brain networks ,different layers might encode functional connectivity corresponding to specific frequency bands , with inter - layer connections encoding cross - frequency interactions . in gene interaction networks, layers might correspond to different genetic interactions ( e.g. , suppressive , additive , or based on physical or chemical associations) . in financial networks , layers might represent different interdependent networks of entities e.g. , banking networks and commercial firms or different trade relationships among legal entities , ranging from individuals to countries . despite considerable progress in the last few years , much remains to be done to obtain a deep understanding of the new physics of multilayer network structure and multilayer network dynamics ( both dynamics of and dynamics on such networks ) . 
in seeking such a deep understanding , it is crucial to underscore the inextricable interdependence of the structure and dynamics of networks . recent efforts have revealed fundamental new physics in multilayer networks . the richer types of spreading and random - walk dynamics can lead to enhanced navigability , induced congestion , and the emergence of new critical properties . such new phenomena also have a major impact on practical goals such as coarse - graining networks to examine mesoscale features and evaluating the importance of nodes , two goals that date to the beginning of investigations of networks . for multilayer networks to achieve their vast potential , there remain crucial problems to address . for example , from a structural point of view , it is much easier to measure edge weights reliably for intra - layer edges than for inter - layer edges . moreover , inter - layer edges not only play a different role from intra - layer ones , but they also play different roles in different applications , and the research community is only scratching the surface of the implications of their presence and the new phenomena to which they lead . for example , how to infer or impose inter - layer edges ( and their associated meaning ) is a major challenge in many applications of social networks , where an inter - layer connection could , for instance , encode the probability that an individual switches from one social platform to another over time . this can be even more complicated in many types of biological networks ( e.g.
, when considering protein and genetic interactions ) . we know that different layers are not independent of each other , but it is much more difficult to quantify and measure the weights of the dependencies in a meaningful way . another major challenge is to understand the propagation of dynamical correlations , due to network structure , across different layers , which affects not only spreading processes but dynamical systems more generally . although our manuscript only addresses physical phenomena related to spreading processes , other dynamical processes also pose extremely fascinating questions . one important example is synchronization , although there are many others ( e.g. , opinion models , games , and more ) . a few studies with particular setups have made good progress on multilayer synchronization ( see , e.g. , ) , but the phenomenology is very rich , and it will require the development of solid theoretical grounding to study synchronization manifolds , stability analysis , transient dynamics , and more . additionally , one can build on diffusion dynamics to study reaction - diffusion systems in multilayer networks . a particularly promising approach in network theory that will have a major impact on future studies of multilayer networks is the analysis of network structure that arises from latent geometrical spaces . observed connectivity in networks often depends on space , either through explicit constraints or by influencing the existence probability and weights of edges , and thus on the distance in that space . either or both of the latent space ( e.g.
, people with connections on more layers can lead to a higher probability of observing an edge between them ) and an associated observed network connections can have a multilayer structure .such explicit use of geometry also allows the possibility of incorporating more continuum types of analyses to accompany the traditional discrete approaches to studying networks .we thus assert that techniques from both geometry and statistics will be crucial for scrutinizing dynamical processes on multilayer networks .the study of multilayer networks is in its infancy , and new emergent physical phenomena that arise from the interaction of such networks and the dynamical processes on top of them are waiting to be discovered .all of the authors wrote the paper and contributed equally to the production of the manuscript .the authors declare no competing financial interests .all authors were funded by fet - proactive project plexmath ( fp7-ict-2011 - 8 ; grant # 317614 ) funded by the european commission .mdd acknowledges financial support from the spanish program juan de la cierva ( ijci-2014 - 20225 ) .cg acknowledges financial support from a james s. mcdonnell foundation postdoctoral fellowship .aa acknowledges financial support from the icrea academia , the james s. mcdonnell foundation , and fis2015 - 38266 .map acknowledges a grant ( ep / j001759/1 ) from the epsrc .the authors acknowledge help from serafina agnello on the creative design of figures .* multilayer networks*. ( * a * ) an edge - colored multigraph , in which nodes can be connected by different types ( i.e. , colors ) of interactions . 
In this example, there are no inter-layer edges. (*b*) A multiplex network, which consists of an edge-colored multigraph along with inter-layer edges that connect entities with their replicas on other layers. (*c*) An interdependent network, in which each layer contains nodes of a different type (circles, squares, and triangles) and includes inter-layer edges to nodes in other layers; in this case, inter-layer edges can occur either between entities and their replicas or between different entities.] *Dynamical processes on multilayer networks*. (*a*) Schematic of a single type of dynamical process running on all layers of a multiplex network. (Arcs of the same color represent the same dynamical process.) (*b*) Schematic of two dynamical processes, each of which is running on a different layer, that are coupled by the interconnected structure of a multilayer network.] *Single dynamics on a multilayer network.* The speed of diffusion dynamics in a multilayer network is characterized by the second-smallest eigenvalue of a Laplacian tensor. We consider a pair of coupled Erdős-Rényi (ER) networks in which we independently vary the intra-layer connection probabilities. In the left panel, we show the behavior of this eigenvalue as a function of the inter-layer coupling weight. A sharp change in its value separates two different regimes that correspond to different structural properties of the multilayer network.] *Coupled dynamics on multilayer networks.
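The coupling-induced transition just described can be sketched numerically. The example below is a minimal illustration (layer sizes, connection probabilities, and coupling weights are arbitrary choices, not those used for the figure): it builds the supra-Laplacian of a two-layer multiplex and computes its second-smallest eigenvalue as a function of the inter-layer coupling weight. For weak coupling, this eigenvalue grows as roughly twice the coupling weight (inter-layer hops are the diffusion bottleneck), before crossing over to an intra-layer-dominated regime.

```python
import numpy as np

def er_laplacian(n, p, rng):
    """Graph Laplacian of an Erdos-Renyi layer G(n, p)."""
    a = np.triu((rng.random((n, n)) < p).astype(float), 1)
    a = a + a.T
    return np.diag(a.sum(axis=1)) - a

def supra_laplacian(l1, l2, dx):
    """Supra-Laplacian of a 2-layer multiplex with inter-layer weight dx."""
    n = l1.shape[0]
    eye = np.eye(n)
    return np.block([[l1 + dx * eye, -dx * eye],
                     [-dx * eye, l2 + dx * eye]])

def lambda2(mat):
    """Second-smallest eigenvalue (the diffusion time scale is 1/lambda2)."""
    return np.sort(np.linalg.eigvalsh(mat))[1]

rng = np.random.default_rng(0)
l1, l2 = er_laplacian(50, 0.2, rng), er_laplacian(50, 0.2, rng)
# Weak coupling: lambda2 is approximately 2*dx; strong coupling: lambda2
# saturates at a value set by the intra-layer structure.
for dx in (0.01, 0.1, 1.0, 100.0):
    print(dx, lambda2(supra_laplacian(l1, l2, dx)))
```

The Rayleigh quotient of the vector that is constant within each layer with opposite signs across layers equals twice the coupling weight, which explains the weak-coupling behavior seen in the printed values.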
* Two (left) reciprocally enhanced and (right) reciprocally inhibited disease-spreading processes of susceptible-infected-susceptible (SIS) type. We compute these diagrams for multiplex networks formed by two layers of 5000-node Erdős-Rényi graphs with a given mean intra-layer degree. The colors in the figure represent the prevalence levels of the diseases at a steady state of Monte Carlo simulations. Note the emergence of a curve of critical points (with a "metacritical point") at which the spreading in one layer depends on the spreading in the other.] | The study of networks plays a crucial role in investigating the structure, dynamics, and function of a wide variety of complex systems in myriad disciplines. Despite the success of traditional network analysis, standard networks provide a limited representation of complex systems, which often include different types of relationships (i.e., "multiplexity") among their constituent components and/or multiple interacting subsystems. Such structural complexity has a significant effect on both dynamics and function. Throwing away or aggregating available structural information can generate misleading results and be a major obstacle towards attempts to understand complex systems. The recent "multilayer" approach for modeling networked systems explicitly allows the incorporation of multiplexity and other features of realistic systems. On one hand, it allows one to couple different structural relationships by encoding them in a convenient mathematical object. On the other hand, it also allows one to couple different dynamical processes on top of such interconnected structures. The resulting framework plays a crucial role in helping achieve a thorough, accurate understanding of complex systems. The study of multilayer networks has also revealed new physical phenomena that remain hidden when using ordinary graphs, the traditional network representation.
Here we survey progress towards attaining a deeper understanding of spreading processes on multilayer networks, and we highlight some of the physical phenomena related to spreading processes that emerge from multilayer structure. Departament d'Enginyeria Informàtica i Matemàtiques, Universitat Rovira i Virgili, 43007 Tarragona, Spain; Carolina Center for Interdisciplinary Applied Mathematics, Department of Mathematics, University of North Carolina, Chapel Hill, NC 27599-3250, USA; Oxford Centre for Industrial and Applied Mathematics, Mathematical Institute, University of Oxford, OX2 6GG, UK; CABDyN Complexity Centre, University of Oxford, Oxford OX1 1HP, UK; and Department of Mathematics, University of California, Los Angeles, California 90095, USA. Corresponding authors: alexandre.arenas.cat, manlio.dedomenico.cat |
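The reciprocal enhancement and inhibition between two SIS processes shown in the figure above can be caricatured with a homogeneous mean-field iteration. Everything below is an illustrative sketch under simplifying assumptions (the multiplicative coupling, the rates, and the mean degree are invented for illustration; the figure itself is based on Monte Carlo simulations of the full multiplex model):

```python
def coupled_sis(beta1, beta2, mu, k, gamma, steps=3000, x0=1e-3):
    """Homogeneous mean-field sketch of two coupled SIS processes.

    x1, x2: prevalence in layers 1 and 2; k: mean intra-layer degree.
    gamma > 1 models reciprocal enhancement (infection in one layer raises
    the effective transmission rate in the other); gamma < 1 models
    inhibition. This is a caricature, not the paper's simulation model.
    """
    x1 = x2 = x0
    for _ in range(steps):
        b1 = beta1 * (1.0 + (gamma - 1.0) * x2)  # effective rate, layer 1
        b2 = beta2 * (1.0 + (gamma - 1.0) * x1)  # effective rate, layer 2
        x1 = x1 + b1 * k * x1 * (1.0 - x1) - mu * x1
        x2 = x2 + b2 * k * x2 * (1.0 - x2) - mu * x2
        x1 = min(max(x1, 0.0), 1.0)
        x2 = min(max(x2, 0.0), 1.0)
    return x1, x2

# Layer 1 alone is below threshold (beta1 * k < mu); layer 2 is above.
print(coupled_sis(0.06, 0.2, mu=0.5, k=6, gamma=1.0))  # no coupling
print(coupled_sis(0.06, 0.2, mu=0.5, k=6, gamma=3.0))  # enhancement
print(coupled_sis(0.2, 0.2, mu=0.5, k=6, gamma=0.2))   # inhibition
```

With enhancement, the endemic layer pushes the otherwise subcritical layer above its effective threshold, which is the qualitative mechanism behind the dependence of one layer's critical point on the other layer's prevalence.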
In recent years, we have seen increased efforts of statistical physicists to tackle stochastic dynamical processes on networks in order to study various phenomena such as ordering processes, the spreading of epidemics and opinions, synchronization, collective behavior in social networks, stability under perturbations, or avalanche dynamics. A drastic simplification can be achieved when short cycles in the network, defined by the interaction terms, are very rare. This is the case for locally tree-like graphs such as random regular graphs, Erdős-Rényi graphs, and Gilbert graphs. For such random graphs with $N$ vertices, almost all cycles have length of order $\log N$, such that their effect is negligible in the thermodynamic limit. For static problems, this has been exploited in the so-called cavity method, where conditional nearest-neighbor probabilities are computed iteratively within the Bethe-Peierls approximation. The method was applied very successfully to study, for example, equilibrium properties of spin glasses, computationally hard satisfiability problems, and random matrix ensembles. This success has motivated the generalization of the cavity method to dynamical problems, which is known as the dynamic cavity method or dynamic belief propagation. As the number of possible trajectories and, hence, the computational complexity increase exponentially in time, applications have, however, been restricted to either very short times, oriented graphs, or unidirectional dynamics with local absorbing states. In the latter case, one can exploit the fact that vertex trajectories can be parametrized by a few switching times. Another idea has been to neglect temporal correlations completely, as in the one-step method, or to retain only some correlations, as in the 1-step Markov ansatz. While this sometimes works well for stationary states at high temperatures, such approximations are usually quite severe for short to intermediate times or for low temperatures.
In this paper, we present a novel, efficient algorithm for the solution of the parallel dynamic cavity equations for generic (locally tree-like) graphs and generic bidirectional dynamics. The central objects in the dynamic cavity method are conditional probabilities for vertex trajectories of nearest neighbors, the so-called edge messages. As temporal correlations decay in time and/or time difference, we can approximate each edge message by a matrix product, i.e., there is one matrix for every edge, edge state, and time step, encoding the temporal correlations in the corresponding part of the evolution. It turns out that the dimensions of these matrices do not have to be increased exponentially in time. One can obtain quasi-exact results with much smaller matrix dimensions. Computation costs and precision can be tuned by controlling the dimensions in truncations. The idea of exploiting the decay of temporal correlations to approximate edge messages in matrix-product form is in analogy with the use of matrix product states for the simulation of strongly correlated, mostly one-dimensional, quantum many-body systems. These have been used very successfully in algorithms like the density-matrix renormalization group to compute, for example, quantum ground-state properties, often with machine precision. Besides lifting the restrictions of the aforementioned approaches, the matrix product edge-message (MPEM) algorithm can also outperform Monte Carlo (MC) simulations of the dynamics in important respects.
In particular, besides allowing for the simulation of single instances, one can alternatively work directly in the thermodynamic limit. Perhaps more importantly, it has a favorable error scaling. While statistical errors in MC decay very slowly with the number of samples $M$ as $1/\sqrt{M}$, MPEM also yields observables with small absolute expectation values with very good precision, which is essential for the study of decay processes and temporal correlations. [fig:msgsubgraph] Let $\sigma_i^t$ denote the state of vertex $i$ at time step $t$, and $\sigma^t$ the state of the full system at time $t$. Given the state probabilities $P^t(\sigma^t)$ for time $t$, we evolve to the next time step, $t+1$, by applying the stochastic transition matrix. As vertex $i$ interacts only with its nearest neighbors $\partial i$, the probability for $\sigma_i^{t+1}$ only depends on the states of these vertices at the previous time step, such that the global transition matrix is a product of local transition matrices $w_i(\sigma_i^{t+1}|\sigma_{\partial i}^{t})$, where $\sigma_{\partial i}^{t}$ is the state of the nearest neighbors of vertex $i$ at time $t$. In the cavity method, one neglects cycles of the (locally tree-like) graph according to the Bethe-Peierls approximation to reduce this computationally complex evolution to the dynamic cavity equation $$\mu_{i\to j}(\bar{\sigma}_i^{t+1}|\bar{\sigma}_j^{t}) = \sum_{\bar{\sigma}_{\partial i\setminus\{j\}}^{t}} \Big[\prod_{s=0}^{t} w_i(\sigma_i^{s+1}|\sigma_{\partial i}^{s})\Big] P_i(\sigma_i^0) \times \Big[\prod_{k\in\partial i\setminus\{j\}} \mu_{k\to i}(\bar{\sigma}_k^{t}|\bar{\sigma}_i^{t-1})\Big],$$ which only involves the edge messages for the edges of a single vertex. For simplicity, we have assumed that vertices are uncorrelated in the initial state, such that $P^0(\sigma^0)=\prod_i P_i(\sigma_i^0)$. The edge messages $\mu_{i\to j}$ in the dynamic cavity equation are conditional probabilities for the trajectories $\bar{\sigma}_i^{t}=(\sigma_i^0,\dots,\sigma_i^{t})$ and $\bar{\sigma}_j^{t-1}$ on edge $(i,j)$. Specifically, if we consider a tree graph and cut off everything "right" of vertex $i$ as indicated in figure [fig:msgsubgraph] by the dashed line, $\mu_{i\to j}(\bar{\sigma}_i^{t}|\bar{\sigma}_j^{t-1})$ denotes the conditional probability of a trajectory on vertex $i$, given the trajectory on vertex $j$.
The dynamic cavity equation constructs the time-$(t+1)$ edge messages out of the edge messages of the previous time step. This is exact for tree graphs and covers locally tree-like graphs in the Bethe-Peierls approximation. Although we have gained a lot in the sense that the computational complexity is now linear in the system size, it is still exponential in time if we were to encode the edge messages without any approximation. To circumvent this exponential increase of computation costs, we can exploit the decay of temporal correlations and approximate the exact edge message by a matrix product $$\mu_{i\to j}(\bar{\sigma}_i^{t}|\bar{\sigma}_j^{t-1}) = \Big[\prod_{s=1}^{t-1} A^{(s)}_{i\to j}(\sigma_j^{s},\sigma_i^{s-1})\Big] \times A^{(t)}_{i\to j}(\sigma_i^{t-1})\,A^{(t+1)}_{i\to j}(\sigma_i^{t}).$$ The particular choice of assigning the vertex variables $\sigma_j^{s}$ and $\sigma_i^{s-1}$ to the matrices occurring in the matrix product is advantageous for the implementation of the recursion relation for MPEMs, as will become clear in the following. In order for the matrix product to yield a scalar, we set the boundary matrix dimensions to one. [fig:mpem] The time evolution starts at $t=0$ with the initial single-vertex distributions. Using the dynamic cavity equation, we iteratively build matrix-product approximations for edge messages for time $t+1$ from those for time $t$. It is simple to insert the matrix-product ansatz for the edge messages into the dynamic cavity equation, but not trivial to bring the resulting edge message again into the canonical MPEM form as required for the subsequent evolution step.
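The matrix-product form of an edge message can be made concrete in a few lines of code. The sketch below is a simplification (it treats one generic physical index per matrix, whereas the MPEM ansatz above carries pairs of vertex variables): it evaluates a matrix product with open boundary dimensions equal to one for a given configuration of physical indices, and checks it against a brute-force contraction of the full tensor.

```python
import numpy as np

def eval_matrix_product(tensors, config):
    """Contract a matrix product for one configuration of physical indices.

    tensors[t] has shape (q, D_t, D_{t+1}) with boundary dims D_0 = D_T = 1;
    config[t] in range(q) selects the physical index at step t.
    """
    m = np.eye(1)
    for a, s in zip(tensors, config):
        m = m @ a[s]
    return m[0, 0]

rng = np.random.default_rng(1)
q, dims = 2, [1, 2, 3, 1]                      # small illustrative shapes
tensors = [rng.standard_normal((q, dims[t], dims[t + 1])) for t in range(3)]

# Brute-force the full tensor and compare configuration by configuration.
full = np.einsum('iab,jbc,kcd->ijkad', *tensors)[..., 0, 0]
ok = all(np.isclose(full[c], eval_matrix_product(tensors, c))
         for c in np.ndindex(q, q, q))
print(ok)
```

The point of the representation is that storing the tensors costs only linearly in the trajectory length, while the full tensor of conditional probabilities would grow exponentially.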
The specific assignment of the vertex variables to matrices in the MPEM ansatz has been chosen such that all contractions (products and sums over vertex variables) occurring in the cavity equation are time-local, in the sense that, given MPEMs in canonical form for all neighbors, the resulting message $\mu_{i\to j}(\bar{\sigma}_i^{t+1}|\bar{\sigma}_j^{t})$ can be written in (non-canonical) matrix-product form. As depicted in figure [fig:mpem]b, the new tensors are obtained by contracting a local transition matrix with tensors from the time-$t$ MPEMs. This contraction entails a sum over the common indices, whose number is set by the vertex degree. Assuming, for simplicity of notation, that the matrix dimensions for all time-$t$ MPEMs are identical, the resulting matrices have correspondingly enlarged dimensions. In preparation for the next time step, we now need to bring the evolved edge message back into canonical form. Furthermore, we need to introduce a controlled approximation that reduces the matrix dimensions, because they would otherwise grow exponentially in time. Both a reordering of the vertex variables $\sigma_j^{s}$ and $\sigma_i^{s-1}$ in the matrix product and a controlled truncation of matrix dimensions can be achieved by sweeping through the matrix product and doing certain singular value decompositions (SVDs) of the tensors. Let us briefly explain the notion of truncations with the example of a matrix product $BC$, where $B$ is an $m\times n$ matrix and $C$ is an $n\times p$ matrix. In order to reduce, in a controlled way, the shared dimension $n$, we do an SVD of the matrix product, $BC = USV^\dagger$, where we have grouped the remaining variables into the row and column indices; $S$ is the diagonal matrix of singular values, and $U$ and $V$ are isometric matrices, $U^\dagger U = \mathbb{1}$ and $V^\dagger V = \mathbb{1}$. Now, truncating some of the singular values, such that only the $\chi$ largest are retained, we obtain the controlled approximation $BC \approx U_\chi S_\chi V_\chi^\dagger$. Note that this truncation scheme guarantees the minimum possible two-norm loss for the given new matrix dimension. While it seems very desirable to discard unimportant information and control the growth of computation costs through such truncations, the SVD appears at first sight to be an insurmountable task.
Assuming that each variable can take $q$ different values, the cost of a direct SVD would scale exponentially in time. However, the beauty of matrix products is that such an SVD can in fact be done sequentially, with costs linear in the trajectory length, as follows. First, we do an exact transformation of the matrix product to bring it into a mixed-canonical form, in which the tensors to the left of a chosen center obey left orthonormality constraints and those to the right obey right orthonormality constraints. This is achieved through a sequence of SVDs: it starts with an SVD of the leftmost tensor, where the diagonal matrix of singular values and the right isometry are absorbed into the following tensor; the sweep continues with an SVD of that tensor, and so on. Analogously, we do a second sequence of SVDs starting from the right and, finally, define the central tensor. After this somewhat laborious preparation, we can do the actual truncation, based on an SVD of the central tensor, with the same singular values as before. With the diagonal matrix $[\Lambda]_{kk'} := \delta_{kk'}\lambda_k$ of retained singular values, the truncated matrix product again takes a canonical form. [fig:glauber] With this tool in hand, we can now truncate the evolved edge message and bring it back into canonical form. In a first sweep from right to left, using SVDs, we sequentially impose the right orthonormality constraints on the tensors. In a subsequent sweep from left to right, again based on SVDs, we can now truncate the tensors to decrease the bond dimensions. What is left is to reorder the indices $\sigma_j^{s}$ and $\sigma_i^{s-1}$ of the vertex variables. In a sweep from right to left, we go from the variable assignment in the truncated and orthonormalized version of the MPEM to the assignment in a matrix product ending in $D^{(t+1)}_{i\to j}(\sigma_i^{t+1})$. At the right boundary, we start with an SVD and controlled truncation, and continue in the same manner until reaching the left boundary. In an analogous final sweep from left to right, we change back to the canonical variable assignment of the MPEM ansatz.
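The elementary truncation step can be sketched directly (matrix shapes here are illustrative; in the algorithm the same operation is applied tensor by tensor during the sweeps). The sketch also verifies the two-norm optimality claim: the error of the truncated product equals the largest discarded singular value.

```python
import numpy as np

def truncate_bond(b, c, chi):
    """Truncate the shared bond of the product b @ c to dimension chi.

    Returns factors (u, sv) with u @ sv approximating b @ c, plus the
    largest discarded singular value, which is the two-norm error.
    """
    u, s, vh = np.linalg.svd(b @ c, full_matrices=False)
    err = s[chi] if chi < len(s) else 0.0
    return u[:, :chi], np.diag(s[:chi]) @ vh[:chi, :], err

rng = np.random.default_rng(2)
b, c = rng.standard_normal((8, 5)), rng.standard_normal((5, 8))

# Keeping all 5 singular values reproduces b @ c (up to round-off).
u, sv, err = truncate_bond(b, c, 5)
print(np.allclose(u @ sv, b @ c), err)

# Keeping chi = 2: the two-norm error equals the first discarded singular
# value (Eckart-Young), i.e., the truncation is two-norm optimal.
u2, sv2, err2 = truncate_bond(b, c, 2)
print(np.isclose(np.linalg.norm(b @ c - u2 @ sv2, 2), err2))
```

In the sequential scheme described above, the same decomposition is never formed on the exponentially large full matrix; the canonical form guarantees that local SVDs of small tensors realize exactly this globally optimal truncation.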
After executing these steps for all edge messages, the next evolution step, from $t+1$ to $t+2$, can follow. The joint probability of the trajectories $\bar{\sigma}_i^{t}$ and $\bar{\sigma}_j^{t}$ for the vertices of an edge $(i,j)$ is given by the product of the two corresponding edge messages. After marginalization, one obtains, for example, the probability for the edge state at time $t$. In the MPEM approach, this can be evaluated efficiently, as indicated in figure [fig:mpem]c, by executing the contractions sequentially from left to right. Similarly, one can, for example, also compute temporal correlators from such probabilities. Figure [fig:glauber] compares the simulation of Glauber dynamics using our MPEM algorithm to MC simulations and to the 1-step Markov approximation. Specifically, we have Ising spins interacting ferromagnetically on random regular graphs, with the corresponding local transition matrices. In the initial state, all spins point up, i.e., the magnetization is one. Besides being applicable to single instances of finite graphs, the MPEM approach also gives direct access to the thermodynamic limit. For disordered systems, this can be done in a population dynamics scheme. The homogeneous case considered here is particularly simple, as all edges of the graph are equivalent in the thermodynamic limit. Hence, one can work with a single MPEM. Figure [fig:glauber]a shows the evolution of the magnetization. In the ferromagnetic phase, it approaches a finite equilibrium value, whereas it decays to zero in the paramagnetic phase. As shown for the MC data, MC simulations contain finite-size effects, which become small for large systems. MC errors decrease slowly with the number of samples $M$, as $1/\sqrt{M}$. This is problematic for observables with small absolute values, where cancellation effects make it difficult to get a precise estimate. This is, e.g., apparent in the magnetization decay in the paramagnetic phase, which, in contrast, is captured very precisely by MPEM.
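For reference, the MC baseline against which MPEM is compared can be sketched in a few lines. The graph size, degree, and inverse temperatures below are illustrative choices, not those of the figure; only the dynamics (parallel heat-bath Glauber updates on a random regular graph, all spins up initially) follows the setup described in the text.

```python
import numpy as np

def random_regular_edges(n, d, rng):
    """Sample a simple d-regular graph on n vertices (pairing model,
    retried until no self-loops or multi-edges remain; n*d must be even)."""
    while True:
        stubs = np.repeat(np.arange(n), d)
        rng.shuffle(stubs)
        e = stubs.reshape(-1, 2)
        e = e[e[:, 0] != e[:, 1]]
        if len(e) == n * d // 2 and \
           len({tuple(sorted(p)) for p in map(tuple, e)}) == len(e):
            return e

def parallel_glauber(n, d, beta, steps, rng):
    """Parallel heat-bath (Glauber) dynamics for the ferromagnetic Ising
    model on a random d-regular graph; returns the magnetization trace."""
    nbrs = [[] for _ in range(n)]
    for a, b in random_regular_edges(n, d, rng):
        nbrs[a].append(b)
        nbrs[b].append(a)
    nbrs = np.array(nbrs)                      # (n, d) neighbor table
    s = np.ones(n)                             # all spins up: m(0) = 1
    mags = [s.mean()]
    for _ in range(steps):
        h = s[nbrs].sum(axis=1)                # local fields
        p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))
        s = np.where(rng.random(n) < p_up, 1.0, -1.0)
        mags.append(s.mean())
    return mags

rng = np.random.default_rng(3)
# For random d-regular graphs, beta_c = atanh(1/(d-1)), about 0.55 for d = 3.
print(parallel_glauber(2000, 3, 1.0, 60, rng)[-1])   # ferromagnetic phase
print(parallel_glauber(2000, 3, 0.1, 60, rng)[-1])   # paramagnetic phase
```

This also makes the error-scaling point tangible: the paramagnetic magnetization fluctuates around zero with amplitude of order the inverse square root of the system size, so resolving its decay requires averaging over many runs, whereas MPEM computes it deterministically.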
In these simulations, we control the MPEM precision by keeping only singular values above a threshold. Decreasing the threshold increases precision and computation costs. The 1-step Markov approximation is not suited to handle temporal correlations. At long times it performs well in one phase and fairly well in the other, but deviates rather strongly at earlier times. Figure [fig:glauber]b shows the connected temporal correlation function as a function of the time difference for several reference times. Its decay behavior can be difficult or impossible to capture with MC. In the example, MC deviations are often orders of magnitude above those of the numerically cheaper MPEM simulations. The novel MPEM algorithm, based on matrix-product approximations of edge messages, allows for an efficient and precise solution of the dynamic cavity equations. Besides lifting restrictions of earlier approaches, mentioned in the introduction, it gives direct access to the thermodynamic limit, and its error scaling is favorable compared to that of MC simulations. We think that it is a very valuable tool, particularly as it yields temporal correlations and other decaying observables with unprecedented precision and gives access to low-probability events. This opens a new door for the study of diverse dynamic processes and of inference and dynamic optimization problems for physical, technological, biological, and social networks. | We describe and demonstrate an algorithm for the efficient simulation of generic stochastic dynamics of classical degrees of freedom defined on the vertices of a locally tree-like graph. Networks with cycles are treated in the framework of the cavity method. Such models correspond, for example, to spin-glass systems, Boolean networks, neural networks, or other technological, biological, and social networks.
Building upon ideas from quantum many-body theory, the algorithm is based on a matrix-product approximation of the so-called edge messages, conditional probabilities of vertex variable trajectories. The matrix product edge messages (MPEM) are constructed recursively. Computation costs and precision can be tuned by controlling the matrix dimensions of the MPEMs in truncations. In contrast to Monte Carlo simulations, the approach has a better error scaling and works both for single instances and in the thermodynamic limit. As we demonstrate with the example of Glauber dynamics, due to the absence of cancellation effects, observables with small expectation values can be evaluated reliably, allowing for the study of decay processes and temporal correlations. |
Contemporary scientific investigations frequently encounter the common issue of exploring the relationship between a response and a number of covariates. In machine learning research, the subject is typically addressed by learning, from the data, an underlying rule that accurately predicts future values of the response. For instance, in the banking industry, financial analysts are interested in building a system that helps to judge the risk of a loan request. Such a system is often trained based on the risk assessments from previous loan applications together with empirical experience. An incoming loan request is then viewed as a new input, upon which the corresponding potential risk (response) is to be predicted. In such applications, the predictive accuracy of a trained rule is of key importance. In the past decade, various strategies have been developed to improve the prediction (generalization) capability of a learning process, of which regularization is a well-known example. Regularization learning prevents over-fitting by shrinking the model coefficients and thereby attains a higher predictive value. To be specific, suppose that the data $(x_i, y_i)$, $i = 1, \dots, m$, are collected independently and identically according to an unknown but definite distribution, where $y_i$ is the response of unit $i$ and $x_i$ is the corresponding $d$-dimensional covariate vector. Let $\mathcal{H}_{K,\mathbf{x}}$ be a sample-dependent hypothesis space (SDHS) built from a positive definite kernel function $K$. The coefficient-based $\ell^q$ regularization strategy ($\ell^q$ regularizer) takes the form of a least-squares fit penalized by the $q$-th power of the $\ell^q$ norm of the coefficient vector, where $\lambda$ is a regularization parameter. With different choices of the order $q$, ([algorihtm]) leads to various specific forms of the regularizer. In particular, when $q = 2$, it corresponds to the ridge regressor, which smoothly shrinks the coefficients toward zero. When $q = 1$, it leads to the lasso, which sets small coefficients exactly to zero and thereby also serves as a variable selection operator.
When $0 < q < 1$, it coincides with the bridge estimator, which tends to produce highly sparse estimates through a non-continuous shrinkage. The varying forms and properties of these regularizers make the choice of the order $q$ crucial in applications. Apparently, an optimal $q$ may depend on many factors, such as the learning algorithm, the purpose of the study, and so forth. These factors make a simple answer to this question infeasible in general. To facilitate the use of $\ell^q$ regularization, we alternatively intend to seek a modeling strategy in which an elaborate selection of $q$ is avoidable. Specifically, we attempt to reveal some insights into the role of $q$ in $\ell^q$ learning via answering the following question: *Problem 1.* Are there any kernels such that the generalization capability of ([algorihtm]) is independent of $q$? In this paper, we provide a positive answer to Problem 1 under the framework of statistical learning theory. Specifically, we provide a featured class of positive definite kernels, under which the $\ell^q$ estimators for all admissible orders $q$ attain similar generalization error bounds. We then show that these estimated bounds are almost essential, in the sense that, up to a logarithmic factor, the upper and lower bounds are asymptotically identical. In the proposed modeling context, the choice of $q$ does not have a strong impact in terms of the generalization capability. From this perspective, $q$ can be specified arbitrarily, or specified merely by other, non-generalization criteria such as smoothness, computational complexity, or sparsity. The remainder of the paper is organized as follows. In section 2, we provide a literature review and explain the motivation of our research. In section 3, we present some preliminaries, including spherical harmonics, Gegenbauer polynomials, and so on. In section 4, we introduce a class of well-localized needlet-type kernels of Petrushev and Xu and show some crucial properties of them, which will play important roles in our analysis.
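For concreteness, the $q = 2$ and $q = 1$ members of the coefficient-regularization family can be implemented directly in a sample-dependent hypothesis space. The sketch below is purely illustrative: the Gaussian kernel, its bandwidth, the toy target, and the ISTA solver are assumptions made for this example and are unrelated to the needlet kernel analyzed later.

```python
import numpy as np

def gauss_kernel(s, t, sigma=0.5):
    """Gaussian kernel matrix K[i, j] = exp(-|s_i - t_j|^2 / (2 sigma^2))."""
    return np.exp(-(s[:, None] - t[None, :]) ** 2 / (2.0 * sigma ** 2))

def l2_coefficients(x, y, lam, sigma=0.5):
    """q = 2 (ridge-type): minimize ||y - K a||^2 + lam ||a||_2^2."""
    k = gauss_kernel(x, x, sigma)
    return np.linalg.solve(k.T @ k + lam * np.eye(len(x)), k.T @ y)

def l1_coefficients(x, y, lam, sigma=0.5, iters=20000):
    """q = 1 (lasso-type) via ISTA: minimize ||y - K a||^2 + lam ||a||_1."""
    k = gauss_kernel(x, x, sigma)
    lip = 2.0 * np.linalg.norm(k.T @ k, 2)   # Lipschitz constant of gradient
    a = np.zeros(len(x))
    for _ in range(iters):
        g = a - (2.0 / lip) * (k.T @ (k @ a - y))
        a = np.sign(g) * np.maximum(np.abs(g) - lam / lip, 0.0)  # soft threshold
    return a

x = np.linspace(-3.0, 3.0, 40)
y = np.sinc(x)                                # noise-free toy target
a2 = l2_coefficients(x, y, 1e-6)
a1 = l1_coefficients(x, y, 1.0)
print(np.max(np.abs(gauss_kernel(x, x) @ a2 - y)))      # small training error
print(np.sum(np.abs(a1) < 1e-12), np.sum(np.abs(a2) < 1e-12))  # sparsity
```

The soft-thresholding step of ISTA is what produces exact zeros for $q = 1$, matching the variable-selection behavior described above, while the linear solve for $q = 2$ yields a dense coefficient vector.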
In section 5, we then study the generalization capabilities of the $\ell^q$ regularizer associated with the constructed kernels for different $q$. In section 6, we provide the proofs of the main results. We conclude the paper with some useful remarks in the last section. In practice, the choice of $q$ in ([algorihtm]) is critical, since it embodies certain potential attributes of the anticipated solutions, such as sparsity, smoothness, computational complexity, memory requirements and, of course, generalization capability. The following simple simulation illustrates that different choices of $q$ can lead to different sparsity of the solutions. The samples are drawn identically and independently according to the uniform distribution from the two-dimensional sinc function plus a Gaussian noise. There are in total 256 training samples and 256 test samples. In fig. 1, we show that different choices of $q$ may induce different sparsity of the estimator for the kernel used in the experiment. It can be found that $\ell^q$ regularizers with small $q$ can induce sparse estimators, while this is impossible for the $\ell^2$ regularizer. [fig. 1: sparsity of $\ell^q$ learning schemes] Therefore, for a given learning task, how to choose $q$ is an important and crucial problem for $\ell^q$ regularization learning. In other words, which standards should be adopted to measure the quality of $\ell^q$ regularizers deserves study. As the most important standard of statistical learning theory, the generalization capability of the regularization scheme ([algorihtm]) may depend on the choice of the kernel, the size of the samples, the regularization parameter, the behavior of the priors and, of course, the choice of $q$.
If we take the generalization capability of the regularization scheme ([algorihtm]) as a function of $q$, it is then natural to ask how this function behaves when $q$ changes for a fixed kernel. If the generalization capability depends heavily on $q$, then it is natural to choose the $q$ for which the generalization capability of the corresponding regularizer is best. If the generalization capability is independent of $q$, then $q$ can be specified arbitrarily, or specified merely by other, non-generalization criteria such as smoothness, computational complexity, or sparsity. However, the relation between the generalization capability and $q$ depends heavily on the kernel selection. To show this, we compare the generalization capabilities of several $\ell^q$ regularization schemes for two kernels in the simulation. One case shows that the generalization capabilities of the regularization schemes may be independent of $q$, and the other case shows that the generalization capability of ([algorihtm]) depends heavily on $q$. In the left panel of fig. 2, we report the relation between the test error and the regularization parameter for the first kernel. It is shown that, when the regularization parameters are appropriately tuned, all of the aforementioned regularization schemes possess similar generalization capabilities. In the right panel of fig. 2, for the second kernel, we see that the generalization capability of $\ell^q$ regularization depends heavily on the choice of $q$. [fig. 2: $\ell^q$ regularization schemes with different $q$] From these simulations, we see that finding kernels such that the generalization capability of ([algorihtm]) is independent of $q$ is of special importance in theoretical and practical applications. In particular, if such kernels exist, then with such kernels $q$ can be chosen solely on the basis of algorithmic and practical considerations for $\ell^q$ regularization.
Here we emphasize that all these conclusions can, of course, only be made on the premise that the obtained generalization capabilities of all the $\ell^q$ regularizers are (almost) optimal. There have been several papers that focus on the generalization capability analysis of the regularization scheme ([algorihtm]). Wu and Zhou were the first, to the best of our knowledge, to show a mathematical foundation of learning algorithms in SDHSs. They claimed that the data-dependent nature of the algorithm leads to an extra error term called the hypothesis error, which is essentially different from regularization schemes with sample-independent hypothesis spaces (SIHSs). Based on this, the authors proposed a coefficient-based regularization strategy and conducted a theoretical analysis of the strategy by dividing the generalization error into the approximation error, the sample error, and the hypothesis error. Following their work, Xiao and Zhou derived a learning rate via bounding the regularization error, sample error, and hypothesis error, respectively. Their result was improved in by adopting a concentration inequality technique with empirical covering numbers to tackle the sample error. On the other hand, Tong et al. deduced an upper bound for the generalization error by using a different method to cope with the hypothesis error. Later, the learning rate was improved further in by giving a sharper estimate of the sample error. In all those studies, some sharp restrictions on the probability distributions (priors) have been imposed; say, both a spectrum assumption on the regression function and a concentration property of the marginal distribution should be satisfied. Noting this, Sun and Wu conducted a generalization capability analysis by using the spectrum assumption on the regression function only. By using a sophisticated functional-analysis method, Zhang et al. and Song et al.
built the regularized least squares algorithm on reproducing kernel Banach spaces (RKBS), and they proved that the regularized least squares algorithm in an RKBS is equivalent to the $\ell^1$ regularizer if the kernel satisfies certain restrictive conditions. Following this method, Song and Zhang deduced a similar learning rate for the $\ell^1$ regularizer and eliminated the concentration-property assumption on the marginal distribution. Limiting the order $q$ to a bounded interval, analogous conclusions hold. Define the Gegenbauer polynomials with respect to the corresponding weight function; then it is easy to see that, suitably normalized, they form a complete orthonormal system for the associated weighted $L^2$ space. Let $B^d$ denote the unit ball in $\mathbb{R}^d$, $S^{d-1}$ the unit sphere in $\mathbb{R}^d$, and $\Pi_n^d$ the set of algebraic polynomials of degree not larger than $n$ defined on $B^d$. Denote by $\theta$ the zero element of $\mathbb{R}^d$. The following important properties of the Gegenbauer polynomials were established in . [lem11] With the notation defined as above, the stated identities and bounds hold. For any nonnegative integer $k$, the restriction to $S^{d-1}$ of a homogeneous harmonic polynomial of degree $k$ is called a spherical harmonic of degree $k$. The class of all spherical harmonics of degree $k$ is denoted by $\mathbb{H}_k^d$, and the class of all spherical polynomials of total degree at most $n$ is denoted by $\Pi_n(S^{d-1})$. It is obvious that $\Pi_n(S^{d-1}) = \bigoplus_{k=0}^{n} \mathbb{H}_k^d$. The dimension of $\mathbb{H}_k^d$ is given explicitly, and that of $\Pi_n(S^{d-1})$ satisfies $\dim \Pi_n(S^{d-1}) \asymp n^{d-1}$, where $\asymp$ denotes that there exist absolute constants $c_1$ and $c_2$ such that the two sides bound each other up to these constants. The well-known addition formula (see ) expresses the sum of products over an arbitrary orthonormal basis of $\mathbb{H}_k^d$ as a Gegenbauer polynomial of the inner product of the two arguments. For $\varepsilon > 0$, we say that a finite subset $\Lambda \subset S^{d-1}$ is an $\varepsilon$-covering of $S^{d-1}$ if the spherical caps of angle $\varepsilon$ centered at the points of $\Lambda$ cover $S^{d-1}$, where $|\Lambda|$ denotes the cardinality of the set $\Lambda$ and $c(\xi, \varepsilon)$ denotes the spherical cap with center $\xi$ and angle $\varepsilon$. The following positive cubature formula can be found in . [fixed cubature] There exists a constant depending only on $d$ such that, for any positive integer $n$ and any $\varepsilon$-covering $\Lambda$ of $S^{d-1}$ with $\varepsilon$ sufficiently small relative to $1/n$, there exists a set of positive numbers $\{\mu_\xi\}_{\xi \in \Lambda}$ such that the integral of any polynomial in $\Pi_n(S^{d-1})$ equals the corresponding weighted sum over $\Lambda$. Define the normalized spherical harmonics accordingly; then it follows from ( or ) that they constitute an orthonormal basis for $\Pi_n(S^{d-1})$ and, of course, for $L^2(S^{d-1})$ when all degrees are included. The following lemma [reproducing kernel] gives a reproducing kernel of $\Pi_n(S^{d-1})$, whose proof will be presented in appendix a.
[reproducing kernel] The space $\Pi_n(S^{d-1})$ is a reproducing kernel Hilbert space; the unique reproducing kernel of this space is the stated polynomial kernel. In this section, we construct a concrete positive definite needlet kernel and show its properties. A function $\eta$ is said to be admissible if $\eta \in C^\infty[0, \infty)$ and satisfies the following condition: $\mathrm{supp}\,\eta \subset [0, 2]$, $\eta(t) = 1$ on $[0, 1]$, and $0 \le \eta(t) \le 1$ on $[1, 2]$. Such a function can easily be constructed out of an orthogonal wavelet mask. We define a kernel by weighting the terms of the reproducing kernel with $\eta$, as in ([best kernel]). As $\eta$ is admissible, the constructed kernel, called the needlet kernel (or localized polynomial kernel) henceforth, is positive definite. We will show that the kernel so defined induces $\ell^q$ regularization learning whose learning rate is independent of the choice of $q$. To this end, we first show several useful properties of the needlet kernel. The following proposition [prop1.0], which can be deduced directly from lemma [reproducing kernel] and the definition of the needlet kernel, reveals that it possesses a reproducing property on $\Pi_n(S^{d-1})$. [prop1.0] Let the needlet kernel be defined as in ([best kernel]). Then, for arbitrary polynomials in $\Pi_n(S^{d-1})$, the reproducing identity holds. Since $\eta$ is an admissible function by definition, the needlet kernel is, for any fixed second argument, an algebraic polynomial of bounded degree in the first argument. At first glance, as a polynomial kernel, it may have good frequency localization but bad space localization. The following proposition [prop2], which can be found in (theorem 4.2 of ), however, shows that the needlet kernel actually possesses very good spatial localization properties. This makes it widely applicable in approximation theory and signal processing. [prop2] Let the needlet kernel be defined as in ([best kernel]). Then there exists a constant, depending only on $d$, $\eta$, and the localization order, such that the stated decay estimate holds. Let $E_n(f)$ be the best approximation error of $f$ by polynomials of degree at most $n$. Define the integral operator associated with the needlet kernel as in ([best operator]). It has been shown in
* remark 4.8 ) that the integral operator possesses the following compressive property : [ prop3.0 ] if is defined as in ( [ best operator ] ) , then , for arbitrary , there exists a constant depending only on and such that by propositions [ prop1.0 ] , [ prop2 ] and [ prop3.0 ] , a standard method in approximation theory yields the following best approximation property of . [ prop4 ] let and be defined in ( [ best operator ] ) , then for arbitrary , there exists a constant depending only on and such that . in this section , we conduct a detailed generalization capability analysis of the regularization scheme ( [ algorihtm ] ) when the kernel function is specified as . our aim is to derive an almost optimal learning rate for the regularization strategy ( [ algorihtm ] ) . we first present a quick review of learning theory . then , we give the main result of this paper , where a -independent learning rate of the regularization schemes ( [ algorihtm ] ) is deduced . finally , we present some remarks on the main result . let be an input space and an output space . assume that there exists an unknown but definite relationship between and , which is modeled by a probability distribution on . it is assumed that admits the decomposition let be a set of finite random samples of size , , drawn independently and identically according to from . the set of examples is called a training set .
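stepping back to the kernel construction : the admissibility condition on η only requires smoothness , η(t) = 1 on [0,1] , support in [0,2] , and 0 ≤ η ≤ 1 on [1,2] . a minimal sketch of one such function , built from the standard smooth bump e^{-1/x} ( an illustrative choice ; the paper instead points to an orthogonal wavelet mask ):

```python
import math

def _f(x):
    # standard smooth-transition ingredient: C-infinity, vanishes for x <= 0
    return math.exp(-1.0 / x) if x > 0 else 0.0

def eta(t):
    """An admissible cutoff: eta = 1 on [0, 1], eta = 0 for t >= 2,
    and a smooth monotone decay on the transition interval [1, 2]."""
    if t <= 1.0:
        return 1.0
    if t >= 2.0:
        return 0.0
    # smooth partition of unity on [1, 2]
    return _f(2.0 - t) / (_f(2.0 - t) + _f(t - 1.0))
```

by symmetry of the partition , eta(1.5) = 1/2 , and eta decreases monotonically across [1, 2] .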
without loss of generality , we assume that almost everywhere . the aim of learning is to learn , from a training set , a function such that is an effective estimate of when is given . one natural measurement of the error incurred by using for this purpose is the generalization error , which is minimized by the regression function defined by we do not know this ideal minimizer , since is unknown , but we have access to random examples from sampled according to . let be the hilbert space of square integrable functions on , with norm . in the setting of , it is well known that , for every , there holds the goal of learning is then to construct a function that approximates , in the norm , using the finite sample . one of the main points of this paper is to formulate the learning problem in terms of probability estimates rather than expectation estimates . to this end , we present a formal way to measure the performance of learning schemes in probability . let and be the class of all borel measures on such that . for each , we enter into a competition over all estimators established in the hypothesis space , and we define the accuracy confidence function by furthermore , we define the accuracy confidence function for all possible estimators based on samples by from these definitions , it is obvious that for all . the sample dependent hypothesis space ( sdhs ) associated with is then defined by and the corresponding regularization scheme is defined by where is the projection operator from the space of measurable functions to ] . by assumption , it is easy to check that also , for arbitrary , we denote . we also need to introduce the class of priors . for any , denote by or the fourier transformation of , where . the inverse fourier transformation will be denoted by . in the space , the derivative of with order is defined as where .
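as a quick numerical illustration of the definition above , the regression function is the conditional mean of y given x , so a local average of samples estimates it ( the model y = x² + gaussian noise below is hypothetical , chosen only for this sketch ):

```python
import random

random.seed(0)

def target(x):
    # hypothetical regression function f(x) = E[y | x]
    return x * x

# draw i.i.d. samples (x_i, y_i) with zero-mean noise
samples = []
for _ in range(200_000):
    x = random.random()
    y = target(x) + random.gauss(0.0, 0.1)
    samples.append((x, y))

# estimate the regression function near x = 0.5 by a local average of y's
ys = [y for x, y in samples if 0.4 <= x <= 0.6]
estimate = sum(ys) / len(ys)
# the bin average of x^2 over [0.4, 0.6] is (0.6**3 - 0.4**3)/(3*0.2) = 0.2533...
```

the local average converges to the conditional mean as the sample size grows and the bin shrinks , which is exactly why the regression function is the ideal minimizer of the generalization error .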
here , the fourier transformation and derivatives are all taken in the sense of distributions . let be any positive number . we consider the sobolev class of functions it follows from the well known sobolev embedding theorem that provided . now , we state the main result of this paper , whose proof will be given in the next section . [ thm1 ] let with , , be any numbers , and . if is defined as in ( [ algorihtm1 ] ) with and , then there exist positive constants depending only on , , and , and satisfying such that for any , and for any , we explain theorem [ thm1 ] below in more detail . at first , we explain why the accuracy function is used to characterize the generalization capability of the regularization schemes ( [ algorihtm1 ] ) . in applications , we are often faced with the following problem : there are data available , and we are asked to produce an estimator with tolerance at most by using these data only . in such circumstances , we have to know the probability of success . it is obvious that such probability depends on and . for example , if is too small , we can not construct an estimator within a small tolerance . this fact is quantitatively verified by theorem [ thm1 ] . more specifically , ( [ negative ] ) shows that if there are data available and with , then it is impossible for regularization scheme ( [ algorihtm1 ] ) to yield an estimator with tolerance error smaller than . this is not a negative result , since we can also see in ( [ negative ] ) that the main reason for the impossibility is the lack of data rather than any inappropriateness of the learning scheme ( [ algorihtm1 ] ) . more importantly , theorem [ thm1 ] reveals a quantitative relation between the probability of success and the tolerance error based on samples . it says in ( [ theorem 1 ] ) that if the tolerance error is relaxed to or larger , then the probability of success of regularization is at least . the first inequality ( lower bound ) of ( [ theorem 1 ] ) implies that such confidence can not be improved further . that
is , we have presented an optimal confidence estimation for regularization scheme ( [ algorihtm1 ] ) with . thus , theorem [ thm1 ] basically concludes the following : if , then no estimator deduced from samples by regularization can approximate the regression function with tolerance smaller than , while if , then the regularization schemes with any can definitely yield estimators that approximate the regression function with tolerance . the values and thus are critical for indicating the generalization error of a learning scheme . indeed , the upper bound of the generalization error of a learning scheme depends heavily on , while the lower bound of the generalization error is related to . thus , in order to have a tight generalization error estimate of a learning scheme , we naturally wish to make the interval ] as short as possible . theorem [ thm1 ] shows that this interval is almost the shortest one in the sense that , up to a logarithmic factor , the upper bound and lower bound are asymptotically identical . noting that the learning rate established in theorem [ thm1 ] is independent of , we can thus conclude that the generalization capability of regularization does not depend on the choice of . this gives an affirmative answer to problem 1 . the other advantage of using the accuracy confidence function to measure the generalization capability is that it allows one to expose phenomena that can not be found if the classical expectation standard is utilized . for example , theorem [ thm1 ] shows a sharp phase transition phenomenon of regularization learning , that is , the behavior of the accuracy confidence function changes dramatically within the critical interval ] . we call this interval the interval of phase transition for the corresponding learning scheme . to make this more intuitive , let us conduct a simulation on the phase transition of the confidence function below .
without loss of generality , we implemented the regularization strategy ( [ algorihtm1 ] ) associated with the kernel ( [ best kernel ] ) for and to yield the estimator . the regularization parameter was chosen as . the training samples were drawn independently and identically according to the uniform distribution from the well known function , that is . the number of training samples was chosen from to and the tolerance was chosen from to with step - length . then , 1000 test data were drawn i.i.d . according to the uniform distribution from . the test error was defined as we repeated the simulations 100 times at each point , and labeled its value as if was smaller than the tolerance error and otherwise . the simulation result is shown in fig . 3 . we can see from fig . 3 that in the upper right part , the colors of all points are red , which means that in those settings , the probability that is smaller than the tolerance is approximately . thus , if the number of samples is small , then regularization schemes can not provide an estimate with a very small tolerance . in the lower left area , the colors of all points are blue , which means that the probability of being smaller than the tolerance is approximately . between these two areas , there exists a band , which could be called the phase transition area , in which the colors of points vary from red to blue dramatically . it is seen that the length of the phase transition interval decreases monotonically with . all of this coincides with the theoretical assertions of theorem [ thm1 ] . regularization ] for comparison , we also present a generalization error bound in terms of the expectation error . corollary [ thm2 ] below can be directly deduced from theorem [ thm1 ] and ( * ? ?
?* chapter 3 ) , if we notice the identity : [ thm2 ] let with , , , and . if is defined as in ( [ algorihtm1 ] ) with and , then there exist constants and depending only on , , and such that where is the set of all possible estimators based on samples . it is noted that the representation theorem in learning theory implies that the generalization capability of an optimal learning algorithm in sdhs is not worse than that of learning in rkhs with a convex loss function . corollary [ thm2 ] then shows that if , then the generalization capability of an optimal learning scheme in sdhs associated with is not worse than that of any optimal learning algorithm in the corresponding rkhs . more specifically , ( [ generalization error ] ) shows that , as far as the learning rate is concerned , all regularization schemes ( [ algorihtm1 ] ) for can realize the same almost optimal theoretical rate . that is to say , the choice of has no influence on the generalization capability of the learning schemes ( [ algorihtm1 ] ) . this also gives an affirmative answer to problem 1 in the sense of expectation . here , we emphasize that the independence of the generalization of regularization from is based on the understanding of attaining the same almost optimal generalization error . thus , in applications , can be arbitrarily specified , or specified merely by other non - generalization criteria ( like complexity , sparsity , etc . ) . the methodology we adopted in the proof of theorem [ thm1 ] seems novel . traditionally , the generalization error of learning schemes in sdhs is divided into the approximation , hypothesis and sample errors ( three terms ) . all of the aforementioned results about coefficient regularization in sdhs fell into this style .
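the success / failure labeling used in the fig . 3 simulation above can be mimicked with a deliberately simplified surrogate ( an ordinary cubic least - squares fit to a hypothetical target sin(πx) — not the paper's regularization scheme or its needlet kernel ) : for each sample size and tolerance , repeat trials and record how often the test error beats the tolerance .

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(np.pi * x)   # hypothetical target function

def trial(m, eps, noise=0.05):
    """One experiment: fit a cubic to m noisy samples and report
    whether the root-mean-square test error is below eps."""
    x = rng.uniform(-1, 1, m)
    y = f(x) + rng.normal(0.0, noise, m)
    coef = np.polyfit(x, y, deg=3)
    xt = rng.uniform(-1, 1, 1000)  # fresh test data
    err = np.sqrt(np.mean((np.polyval(coef, xt) - f(xt)) ** 2))
    return err < eps

def success_rate(m, eps, reps=100):
    return sum(trial(m, eps) for _ in range(reps)) / reps

# a loose tolerance with many samples should almost always succeed, while a
# tolerance below the cubic approximation floor can never be met
rate_easy = success_rate(200, 0.5)
rate_hard = success_rate(6, 0.02)
```

plotting success_rate over a grid of (m, eps) reproduces the red / blue picture qualitatively : a sharp band separates the always - fail and always - succeed regions .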
according to , the hypothesis error has been regarded as the reflection of the data - dependent nature of sdhs ( sample dependent hypothesis spaces ) , and an indispensable part attributed to an essential characteristic of learning algorithms in sdhs , compared with learning in sihs ( sample independent hypothesis spaces ) . with the specific kernel function , we will divide the generalization error of regularization in this paper into the approximation and sample errors ( two terms ) only . both of these terms are dependent on the samples . the success in this paper then reveals that , for at least some kernels , the hypothesis error is negligible , or can be avoided in estimation , when regularization learning is analyzed in sdhs . we show that this new methodology can bring the important benefit of yielding an almost optimal generalization error bound for a large class of priors . such a benefit may reasonably be expected to extend beyond regularization . we sketch the methodology to be used as follows . due to the sample dependent property , any estimator constructed in sdhs may be a random approximant . to bound the approximation error , we first deduce a probabilistic cubature formula for algebraic polynomials . then we can discretize the near - best approximation operator based on the probabilistic cubature formula . thus , the well known jackson - type error estimate can be applied to derive the approximation error . to bound the sample error , we will use a method different from the traditional approaches . since the constructed approximant in sdhs is a random approximant , concentration inequalities such as the bernstein inequality are not available . in our approach , based on the prominent property of the constructed approximant , we will bound the sample error by using the concentration inequality established in twice .
then the relation between the so - called pseudo - dimension and the covering number yields the sample error estimate for regularization schemes ( [ algorihtm1 ] ) with arbitrary . hence , we divide the proof into four subsections . the first subsection is devoted to establishing the probabilistic cubature formula . the second subsection is to construct the random approximant and study the approximation error . the third subsection is to deduce the sample error and the last subsection is to derive the final learning rate . we present the details one by one below . in this subsection , we establish a probabilistic cubature formula . first , we need several lemmas . the weighted norm on the -dimensional unit sphere is defined as follows . let and . define the following ( * ? ? ? * lemma 2.3 ) gives a weighted nikolskii inequality for spherical polynomials . [ weighted nikolskii ] let . then for any , where is a positive constant depending only on and . lemma [ relation ball sphere ] establishes a relation between cubature formulas on the unit sphere and cubature formulas on the unit ball , which can be found in ( * ? ? ? * theorem 4.2 ) . [ relation ball sphere ] if there is a cubature formula of degree on given by whose nodes are all located on , then there exists a cubature formula of degree on , that is , where are the first components of . the following lemma [ bernstein ] is known as the bernstein inequality for random variables , which can be found in . [ bernstein ] let be a random variable on a probability space with mean , variance . if for almost all , then , for all , we also need a lemma showing that if is a set of independent random variables drawn identically according to a distribution , then with high confidence the cubature formula holds . [ random cubature on sphere ] let , and .
if are i.i.d . random variables drawn according to an arbitrary distribution on , then there exists a set of real numbers such that holds with confidence at least subject to * proof . * for the sake of brevity , we write in the following . since the sampling set consists of a sequence of i.i.d . random variables on , the sampling points are a sequence of functions on some probability space . without loss of generality , we assume for arbitrary fixed . if we set , then we have where we have used the equality furthermore , it follows from lemma [ weighted nikolskii ] that hence on the other hand , we have then using lemma [ weighted nikolskii ] again , there holds thus it follows from lemma [ bernstein ] that with confidence at least there holds this means that if is a sequence of i.i.d . random variables , then the marcinkiewicz - zygmund inequality holds with probability at least . then , almost the same argument as that in ( * ? ? ? * theorem 4.1 ) or ( * ? ? * theorem 4.2 ) implies lemma [ random cubature on sphere ] . by virtue of the above lemmas , we can prove the following proposition [ probabilistic cubature ] . [ probabilistic cubature ] let and be a set of random variables independently and identically drawn according to an arbitrary distribution . then there exists a set of real numbers and a constant depending only on such that the equality holds with confidence at least subject to . to estimate the upper bound of , we first introduce an error decomposition strategy . it follows from the definition of that , for arbitrary , since with , it follows from the sobolev embedding theorem that . thus , it can be deduced from proposition [ prop3.0 ] and proposition [ prop4 ] that there exists a such that }(f_\rho),\ ] ] where ] and , it follows from that . that is , for , holds with confidence at least . the lower bound can be deduced more easily . actually , it follows from ( * ? ? ? * equation ( 3.27 ) ) ( see also ) that for any estimator , there holds where and for some universal constant .
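as a toy numerical illustration of the cubature idea above ( equal monte carlo weights and one fixed degree-2 polynomial on the sphere — far cruder than proposition [ probabilistic cubature ] itself ) : random nodes already integrate a spherical polynomial accurately with high probability .

```python
import math
import random

random.seed(1)

def uniform_on_sphere():
    # normalize a 3-d gaussian vector to get a uniform point on the 2-sphere
    while True:
        v = [random.gauss(0, 1) for _ in range(3)]
        n = math.sqrt(sum(c * c for c in v))
        if n > 1e-12:
            return [c / n for c in v]

# integrand: the degree-2 spherical polynomial P(x) = x_1 ** 2,
# whose exact average over the sphere is 1/3 by symmetry
n = 200_000
approx = sum(uniform_on_sphere()[0] ** 2 for _ in range(n)) / n
```

with this many random nodes , the equal - weight rule lands within a small deviation of the exact value 1/3 with overwhelming probability , in the spirit of the marcinkiewicz - zygmund - type inequality used in the proof .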
with this , the proof of theorem [ thm1 ] is completed . in studies and applications , regularization is a fundamental technique for improving the performance of a learning machine . the regularization schemes ( [ algorihtm ] ) with are well known to be central in use . in this paper , we have studied whether the generalization capability of regularization depends on the choice of . by formulating a new methodology for estimating the generalization error , we have shown that there is at least one positive definite kernel , say , such that , associated with such a kernel , the learning rate of the regularization schemes is independent of the choice of . ( to be more precise , we verified that with the kernel , all regularization schemes ( [ algorihtm ] ) can attain the same almost optimal learning rate in the following sense : up to a logarithmic factor , the upper and lower bounds of the generalization error of the regularization schemes are asymptotically identical . ) this implies that for some kernels , the generalization capability of regularization may not depend on . therefore , as far as the generalization capability is concerned , for those kernels , the choice of is not important , which then relaxes the model selection difficulty in applications . the problem is , however , far more complicated . we have also illustrated in section 2 that there exists a kernel with which the generalization capability of regularization depends heavily on the choice of . thus , completely answering whether or not the choice of affects the generalization of regularization is by no means easy , and remains incomplete .
though we have constructed a concrete kernel example , the localized polynomial kernel , with which implementing regularization in sdhs can realize the almost optimal learning rate independently of the choice of , we have not provided a practically feasible algorithm to implement the learning with the almost optimal generalization capability . this is because the kernel we have constructed is not easily computed in practice , even though we can use the cubature formula ( lemma [ fixed cubature ] ) to discretize it . thus , seeking kernels that possess similar properties to those of and can be implemented easily deserves study . this is under our current investigation . to prove lemma [ reproducing kernel ] , we need the following aronszajn theorem ( see ) . [ aronszajn ] let be a separable hilbert space of functions over with orthonormal basis . is a reproducing kernel hilbert space if and only if for all . the unique reproducing kernel is defined by * proof of lemma [ reproducing kernel ] . * since is an orthonormal basis for , for arbitrary , there exists a set of real numbers such that where the summation concerning the index is . on the other hand , it follows from ( [ basis ] ) that thus , the addition formula ( [ addition ] ) yields the above equality together with ( [ 3 ] ) and ( [ 4 ] ) implies therefore , there holds the above equality together with lemma [ aronszajn ] yields lemma [ reproducing kernel ] .
to facilitate the use of -regularization , we intend to seek a modeling strategy where an elaborate selection of is avoidable . in this spirit , we place our investigation within a general framework of -regularized kernel learning under a sample dependent hypothesis space ( sdhs ) . for a designated class of kernel functions , we show that all estimators for attain similar generalization error bounds . these estimated bounds are almost optimal in the sense that , up to a logarithmic factor , the upper and lower bounds are asymptotically identical . this finding tentatively reveals that , in some modeling contexts , the choice of might not have a strong impact in terms of the generalization capability . from this perspective , can be arbitrarily specified , or specified merely by other non - generalization criteria like smoothness , computational complexity , sparsity , etc . * keywords : * learning theory , regularization learning , sample dependent hypothesis space , learning rate + * msc 2000 : * 68t05 , 62g07 . \1 . institute for information and system sciences , school of mathematics and statistics , xi'an jiaotong university , xi'an 710049 , p. r. china \2 . the methodology center , the pennsylvania state university , department of statistics , 204 e. calder way , suite 400 , state college , pa 16801 , usa
synchronization of interacting elements is an emergent phenomenon in complex systems . rhythms and synchronization of neuronal activities are essential for odor discrimination , visual feature integration , and brain computation , but perfect synchronization is not always desirable , and can sometimes be disastrous , as in mental disorders such as epilepsy . however , considering the enormous number of connections between cells and organs in the body , the apparent independence or desynchronization between rhythms in different organs may be a bigger puzzle than their synchronization . indeed , electrical engineers design desynchronization to implement time division multiple access ( tdma ) , which prevents message collisions and provides asynchronous sleep cycles for nodes on wireless sensor networks . complex systems sometimes show partial synchronization . two types of partial synchronization have been recognized . the _ chimera state _ shows spatial separation of synchronized and desynchronized domains , whereas _ periodic synchronization _ shows temporal alternation between synchronized and desynchronized states . unihemispheric sleep is an example of the chimera state where one half of the brain sleeps while the other half remains awake ; some animals adopt this strategy when predation risk is high . context - dependent control of synchronization is therefore necessary to ensure that complex systems function appropriately . in particular , to disrupt synchronization and achieve desynchronization , various methods have been proposed ; these include simply decreasing interaction strengths , stimulating with a short pulse , giving linear / nonlinear delayed feedback , and introducing inhibitory interactions . recent studies have investigated the synchronization - desynchronization transition in complex networks . these studies have revealed that in locally - coupled networks , this transition can be controlled by adapting the topology of the networks .
however , those controls require parameter optimization , because the transition between synchronized and desynchronized states is sharp . on top of the fine - tuning problem , real systems generally have high complexity : ( i ) complex networks have heterogeneous couplings with excitatory and inhibitory links ; ( ii ) the strength of the couplings depends on the activities of the nodes ; and ( iii ) the networks are hierarchically organized . therefore , precise control of synchronization looks implausible in complex systems . we ask how such complex systems realize precise and robust control of synchronization between interacting elements . in a hierarchical system , generation of coherent behavior becomes increasingly difficult as the complexity of the subsystems increases . this simple observation suggests a simple and effective way to control the synchronization of a complex system by changing the _ complexity _ of its subsystems . here we formulate a minimal model to demonstrate this idea by using coupled stuart - landau oscillators . synchronization phenomena can generally be described by considering oscillators and their coupling . in particular , we adopt stuart - landau oscillators in which both amplitude and phase are variable : first , we consider a _ subsystem _ that consists of three coupled oscillators with as a minimal set for generating diverse complexity . the dynamics of the complex variable can be understood by decomposing it into amplitude and phase parts : in the absence of coupling ( ) , the amplitude converges to a stable focus ( ) for , but the focus loses stability at the hopf bifurcation point ( ) , and a stable limit cycle emerges with amplitude and frequency for . note that is another solution for . here we used only positive - definite ; when became negative in simulations , we made it positive with the transformation and .
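since the displayed equations were lost in extraction , here is a minimal numerical sketch assuming the standard stuart - landau normal form dz/dt = (μ + iω − |z|²)z with a signed linear coupling ; the adjacency matrix and the coupling form below are illustrative assumptions , not a transcription of the paper's equations .

```python
import numpy as np

mu, omega, k = 1.0, 1.0, 0.0          # k = 0: three uncoupled limit cycles
a = np.array([[0, 1, -1],
              [-1, 0, 1],
              [1, -1, 0]])            # example signed (attractive/repulsive) couplings

z = np.array([0.3 + 0.1j, 0.2 - 0.4j, -0.5 + 0.2j])
dt = 0.01
for _ in range(5000):
    coupling = k * (a @ z)            # linear coupling through the adjacency matrix
    z = z + dt * ((mu + 1j * omega - np.abs(z) ** 2) * z + coupling)

# with k = 0 and mu > 0, each oscillator relaxes to the circle |z| = sqrt(mu),
# matching the supercritical hopf picture described in the text
```

switching k to a positive value and varying the signs in the adjacency matrix is then enough to explore the attractive / repulsive phase dynamics discussed next .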
once the coupling is applied ( ) , the subsystem produces rich dynamics depending on the adjacency matrix with entries that determine the coupling sign from oscillator to oscillator . the phase part of eq . ( [ phase ] ) is a generalized kuramoto model , in which oscillators have positive and negative couplings , and their coupling strengths depend on their amplitudes . the amplitude dependence has the physical meaning that when oscillator affects oscillator , the coupling strength is proportional to the affecter amplitude , but inversely proportional to the receiver amplitude ; i.e. , the pair of a strong affecter and a weak receiver exhibits maximal coupling . and , and ( b ) . black points : attractors ; arrows : vector flows on the plane ( ) . ( b ) the basins of the three attractors , , , and , are painted red ( top left ) , blue ( bottom right ) , and green ( middle ) , respectively . ] when the coupling is weak , the amplitudes can be approximated as , and the three phase equations of eq . ( [ phase ] ) can be reduced to two equations for the phase differences and . assuming identical intrinsic frequencies ( ) for simplicity , we obtain where . their steady states ( and ) are governed by the parameters and the adjacency matrix . here self - couplings can be safely ignored ( ) , because their contribution is absent in eq . ( [ phase ] ) . considering that the off - diagonal elements for can take either 1 or -1 for positive or negative couplings , the adjacency matrix has a total of possibilities . leaving index degeneracy aside , 16 cases remain ( fig . [ fig : motif ] ) . most of them drive to single attractors at steady state , regardless of . however , two anti - symmetric matrices , which correspond to networks 9 and 10 in fig .
[ fig : motif ] , are exceptional in that they produce multiple attractors for similar ( ) . however , the two anti - symmetric matrices also generate single attractors for largely dissimilar . the effective coupling can differ from the topological coupling for largely dissimilar . therefore , we focus on networks 9 and 10 , which can alter their complexity ( number of attractors ) by controlling . in particular , network 9 has three populations of distinguishable oscillators : the first oscillator attracts the other two ; the second repels the other two ; and the third attracts one and repels one . in contrast , network 10 has three populations of indistinguishable oscillators . network 9 produces single attractors for dissimilar ( fig . [ fig : vectorflow]a ) , but three attractors for similar ( fig . [ fig : vectorflow]b ) . for , the phase plane has three attractors ( ) , ( ) , and ( ) . we label them , , and , because and have small basins of attraction , whereas has a big basin ( fig . [ fig : vectorflow]b ) . when initial conditions [ , ] are given in the small basins , the subsystem is quickly attracted to or , but when initial conditions are given in the big basin , the subsystem approaches slowly ; near the center , the change in the radius of the cycle is not readily apparent . when the amplitude dynamics in eq . ( [ amplitude ] ) is turned off ( ) and only the phase model in eq . ( [ phase ] ) is considered with frozen amplitudes , the subsystem is not attracted to the centers , but revolves endlessly around the centers at the given initial radii . , , and are special positions at which the coupling terms in eq . ( [ amplitude ] ) vanish and the amplitudes become fixed . network 10 has the same phase plane as network 9 , with simple translations and . , which then affect every unit in reverse .
] to demonstrate that the synchronization of a hierarchical system can depend on the complexity ( number of attractors ) of its subsystems , we construct a hierarchical system composed of multiple units . each unit corresponds to one subsystem of three coupled oscillators . the amplitude and phase of the oscillator in the unit are represented as . we consider an _ inter - unit _ coupling in addition to the _ intra - unit _ coupling ( fig . [ fig : cell_diagram]a ) . in particular , we start by simply copying the intra - unit coupling into the inter - unit coupling : where is introduced to represent weaker inter - unit couplings compared with intra - unit couplings ( ) . by using an arithmetic average of for units , we can rearrange eq . ( [ slmodel2a ] ) as where and . henceforth , we use the rescaled parameters and . this equation can be interpreted as the mean fields affecting every unit ( fig . [ fig : cell_diagram]b ) . to probe the inter - unit synchronization , we define order parameters for the three populations of oscillators : with . because the three populations usually have the same degree of synchronization , hereafter we represent them simply as , unless otherwise specified . for the anti - symmetric intra - unit couplings , the arithmetic mean field leads the units to desynchronize ( ) , because the inter - unit coupling in eq . ( [ slmodel2 ] ) effectively generates repulsive interactions between units . for example , in a two - unit system ( ) , indirectly interacts with through or . these three - step interactions always yield a net negative loop for the anti - symmetric matrices . the effective repulsion between units leads them to stay as far away from each other as possible . this state has been referred to as the _ splay state _ . therefore , under the arithmetic mean field , the anti - symmetric intra - unit coupling can provide an effective scheme for the desynchronization of hierarchical systems .
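the order parameters defined above ( their formula is elided in the extraction ) presumably take the familiar kuramoto form R = |(1/N) Σ_n e^{iθ_n}| for each population ; a sketch under that assumption :

```python
import numpy as np

def order_parameter(thetas):
    """Kuramoto-type order parameter R = |mean_n exp(i*theta_n)|:
    R = 1 for identical phases, R -> 0 for phases spread evenly."""
    return np.abs(np.mean(np.exp(1j * np.asarray(thetas))))

# fully synchronized population: all phases identical
r_sync = order_parameter([0.7, 0.7, 0.7, 0.7])
# splay-like state: phases spread evenly over the circle
r_splay = order_parameter(2 * np.pi * np.arange(4) / 4)
```

the splay state driven by the effective repulsion thus registers as R ≈ 0 , while full inter - unit synchronization registers as R = 1 .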
for the inter - unit coupling , we also consider a geometric average ( log - average ) of : \[ \bar{z}_\sigma \equiv \Big[ \prod_{n=1}^{N} z_{n\sigma} \Big]^{1/N}. \] the geometric average is often suitable in biological systems . unlike the arithmetic average in eq . ( [ arithmetic_average ] ) , the geometric average decouples the amplitude and phase averages : \[ \bar{r}_\sigma \equiv \Big( \prod_{n=1}^{N} r_{n\sigma} \Big)^{1/N}, \qquad \bar{\theta}_\sigma \equiv \frac{1}{N} \sum_{n=1}^{N} \theta_{n\sigma}. \] the average phase is given as an arithmetic average of the bare phases , independent of the amplitudes . after adopting the geometric mean field of eq . ( [ geometric_average ] ) , we decompose eq . ( [ slmodel2 ] ) into amplitude and phase parts : \[ \begin{aligned} \label{amplitude2} \dot{r}_{n\sigma} & = & \big( \mu - r_{n\sigma}^2 \big) r_{n\sigma} + k \sum_{\sigma' \neq \sigma} a_{\sigma \sigma'} \big[ r_{n\sigma'} \cos(\theta_{n\sigma'} - \theta_{n\sigma}) + \epsilon \bar{r}_{\sigma'} \cos(\bar{\theta}_{\sigma'} - \theta_{n\sigma}) \big], \\ \label{phase2} \dot{\theta}_{n\sigma} & = & \omega_{n\sigma} + k \sum_{\sigma' \neq \sigma} a_{\sigma \sigma'} \bigg[ \frac{r_{n\sigma'}}{r_{n\sigma}} \sin(\theta_{n\sigma'} - \theta_{n\sigma}) + \epsilon \frac{\bar{r}_{\sigma'}}{r_{n\sigma}} \sin(\bar{\theta}_{\sigma'} - \theta_{n\sigma}) \bigg]. \end{aligned} \] in this study , we adopt the geometric mean - field coupling , because it produces a broader spectrum of synchronization between units . this completes our formulation of hierarchical oscillators .
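the decoupling of amplitude and phase under the geometric average is easy to check numerically : the principal geometric mean of z_n = r_n e^{iθ_n} has amplitude equal to the geometric mean of the r_n and phase equal to the arithmetic mean of the θ_n , provided the phases stay away from the branch cut .

```python
import numpy as np

r = np.array([1.0, 2.0, 4.0])
theta = np.array([0.1, 0.2, 0.3])
z = r * np.exp(1j * theta)

# principal geometric mean via the logarithm
# (valid while the phases remain inside (-pi, pi))
z_bar = np.exp(np.mean(np.log(z)))

r_bar = np.abs(z_bar)        # geometric mean of amplitudes: (1*2*4)**(1/3) = 2
theta_bar = np.angle(z_bar)  # arithmetic mean of phases: (0.1+0.2+0.3)/3 = 0.2
```

with the arithmetic average , by contrast , the mean amplitude and mean phase of the complex sum mix the individual amplitudes and phases inseparably .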
and ) and identical intrinsic frequencies ( ) for , the hierarchical system generates seven dynamic states : ( i ) the state , in which two units are attracted into the same attractor or ; ( ii ) the state , in which one unit rotates around or , and the other rotates around ; ( iii ) the state , in which one unit rotates around , and the other unit jumps alternately between and ; ( iv ) the state , in which two units slowly approach ; ( v ) ; ( vi ) ; ( vii ) the states , in which two units slowly approach horizontally - , vertically - , and diagonally - shifted attractors from . different colors represent different pairs of two units . ] we investigate the synchronization of hierarchical oscillators governed by eqs . ( [ amplitude2 ] ) and ( [ phase2 ] ) . we first consider a two - unit system ( ) , because it contains the essential ingredients of the hierarchical system ; then , we extend the model to larger systems . to examine the dynamics of two - unit systems , we again focus on the phase differences ( and ) between the three oscillators within each unit , and the phase difference between the two units . the phase variables ( ) of the two units are transformed to ( ) where , , , , and . here we assume identical intrinsic frequencies ( ) for simplicity . first , when are widely dissimilar , individual units have single and identical attractors ( section [ section2 ] ) . thus the phase differences ( , ) and ( , ) of the two units are attracted into the same attractor in the absence of inter - unit coupling . the attraction is not perturbed even with weak coupling ( ) between the units . therefore , the inter - unit coupling acts to fully synchronize the two units ( ) . second , when are similar ( e.g. , ) , individual units of network 9 have three attractors ( fig .
[fig : vectorflow]b ) , and two units can be attracted to different attractors ( , , ) .this complexity impedes the synchronization between the two units . in the absence of inter - unit coupling ,the two units have four states if one considers the symmetry of and : ( i ) state , in which two units stay in the same small basin ; ( ii ) state , in which two units stay in different small basins .( iii ) state , in which one unit stays in the big basin , while the other unit stays in one of the two small basins ; and ( iv ) state , where two units stay in the big basin . under weak inter - unit coupling ( ), we numerically solved eqs .( [ amplitude2 ] ) and ( [ phase2 ] ) , and observed ample dynamics of the two - unit system ( fig .[ fig : phase_two ] ) : * state .two units arrive at the same attractor , either or , and are fully synchronized ( ) .* state .this state is unstable and excluded for .therefore , two units never sit on and .* state .two units rotate around and instead of being attracted into fixed points .* state .one unit shows a precessional cycle around , and the other unit jumps alternately between and .when two units sit on different basins ( and states ) , they show partial synchronizations ( ) .regarding the state , we found new stationary solutions in the two - unit system that satisfy with no contribution of coupling terms in eq .( [ amplitude2 ] ) : + \epsilon k \big [ \cos \big ( \bar{x } + \frac{z}{2 } \big ) + \cos \big ( \bar{y } + \frac{z}{2 } \big ) \big ] = 0 , \\ k \big [ \cos x_1 - \cos ( x_1 - y_1 ) \big ] + \epsilon k \big [ \cos \big ( x_1 - \frac{z}{2 } \big ) - \cos \big ( x_1 - \bar{y } - \frac{z}{2 } \big ) \big ] = 0 ,\\ k \big [ \cos y_1 + \cos ( x_1 - y_1 ) \big ] + \epsilon k \big [ \cos \big ( y_1 - \frac{z}{2 } \big ) + \cos \big ( \bar{x } - y_1 + \frac{z}{2 } \big ) \big ] = 0 ,\\ k \big [ \cos x_2 + \cos y_2 \big ] + \epsilon k \big [ \cos \big ( \bar{x } - \frac{z}{2 } \big ) + \cos \big ( \bar{y } - \frac{z}{2 } \big ) 
\big ] = 0 , \\k \big [ \cos x_2 - \cos ( x_2 - y_2 ) \big ] + \epsilon k \big [ \cos \big ( x_2 + \frac{z}{2 } \big ) - \cos \big ( x_2 - \bar{y } + \frac{z}{2 } \big ) \big ] = 0 , \\ k \big [ \cos y_2 + \cos ( x_2 - y_2 ) \big ] + \epsilon k \big [ \cos \big ( y_2 + \frac{z}{2 } \big ) + \cos \big ( \bar{x } - y_2 -\frac{z}{2 } \big ) \big ] = 0\end{aligned}\ ] ] with and . under these constraints , eq .( [ phase2 ] ) yields five phase difference equations : \nonumber \\ & & + \epsilon k \big [ \sin \big(\bar{x } + \frac{z}{2 } \big ) + \sin \big(\bar{y } + \frac{z}{2 } \big ) - \sin \big(x_1-\frac{z}{2 } \big ) + \sin \big(x_1 - \bar{y } - \frac{z}{2 } \big ) \big ] , \\ { \dot{y}_1 }& = & k \big [ \sin x_1 + \sin(x_1 - y_1 ) \big ] \nonumber \\ & & + \epsilon k \big[\sin \big(\bar{x } + \frac{z}{2 } \big ) + \sin \big(\bar{y } + \frac{z}{2 } \big ) - \sin \big(y_1-\frac{z}{2 } \big ) + \sin \big ( \bar{x } - y_1 + \frac{z}{2}\big ) \big ] , \\ { \dot{x}_2 } & = & k\big [ \sin y_2 + \sin(x_2 - y_2 ) \big ] \nonumber \\ & & + \epsilon k \big [ \sin \big ( \bar{x } - \frac{z}{2 } \big ) + \sin \big ( \bar{y } - \frac{z}{2 } \big ) - \sin \big(x_2+\frac{z}{2 } \big ) + \sin \big(x_2 - \bar{y } + \frac{z}{2 } \big ) \big ] , \\ { \dot{y}_2 } & = & k \big [ \sin x_2 + \sin(x_2 - y_2 ) \big ] \nonumber \\ & & + \epsilon k \big [ \sin \big ( \bar{x } - \frac{z}{2 } \big ) + \sin \big ( \bar{y } - \frac{z}{2 } \big ) - \sin \big(y_2+\frac{z}{2 } \big ) + \sin \big(\bar{x } - y_2 - \frac{z}{2 } \big ) \big ] , \\ \label{eq : z } { \dot{z } } & = & k \big [ \sin x_1 + \sin y_1 - \sin x_2 - \sin y_2 \big ] \nonumber \\ & & + \epsilon k\big [ \sin \big(\bar{x}+\frac{z}{2}\big ) -\sin \big(\bar{x } -\frac{z}{2}\big ) + \sin \big(\bar{y}+\frac{z}{2 } \big)- \sin \big(\bar{y}-\frac{z}{2 } \big ) \big].\end{aligned}\ ] ] we examine the stationary conditions , .it is interesting that is automatically satisfied , once with constraints , and .the condition gives four states : * 
state ( and ) . two units arrive at ( and ) with arbitrary phase difference between two units . any values can satisfy in eq .( [ eq : z ] ) ; i.e. , two units are coupled , but still behave independently . * state ( and ) . two units ( , ) and ( , ) are horizontally located from their center ( , ) ; , and and satisfy * state ( and ) . two units ( , ) and ( , ) are vertically located from their center ( , ) ; , and and satisfy * state ( and ) . two units ( , ) and ( , ) are diagonally located from their center ( , ) ; , and and satisfy the complex eqs .( [ complexeq1])-([complexeq3 ] ) give exact values of the phase differences ( ) and ( ) between three oscillators within each unit , and the phase difference between two units . however , if is too small , they do not have solutions . therefore , , , and states can only emerge under sufficient inter - unit coupling . for , the three states show partial synchronization ( ) . the realization of these seven states depends on initial conditions . however , the initial - condition dependence is chaotic in the two - unit system . although we can not predict final states given initial conditions , the realization probability of each state is nevertheless well defined ( table [ tab : table1 ] ) . the probability of state , which generates full synchronization ( ) , is relatively small because the probability is small that two units will enter the same small basin . therefore , as the number of units considered increases , the likelihood that each unit sits on a different attractor also increases , so the units in a hierarchical system may become partially synchronized . we performed the same analysis for network 10 , and confirmed that the two - unit system of network 10 also generates seven states like network 9 . their realization probabilities are also the same except for the switch between and states : ()=( ) and ()=( ) .
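The two-unit dynamics described above can be explored numerically. The sketch below integrates a generic complex-form Stuart-Landau system, whose polar decomposition has coupling terms of the same form as eqs. ([amplitude2]) and ([phase2]), with a geometric mean field between units. The coupling matrix A, the gains K and eps, and the frequency omega are illustrative assumptions and do not reproduce the paper's networks 9 or 10.

```python
import numpy as np

# Hedged generic model in complex form:
#   dz/dt = (1 + i*omega - |z|^2) z + K * sum_{s'!=s} A[s,s'] * (z_{n,s'} + eps * Zbar_{s'})
# whose real/imaginary parts give couplings of the printed form (r'/r) sin(theta'-theta).
rng = np.random.default_rng(0)
K, eps, omega = 0.5, 0.05, 1.0
A = np.array([[0.0, -1.0, -1.0],
              [1.0,  0.0, -1.0],
              [1.0,  1.0,  0.0]])        # sign-asymmetric intra-unit couplings (illustrative)
N = 2                                    # two units, three oscillators each

z = np.exp(2j * np.pi * rng.random((N, 3)))   # random initial phases, unit amplitudes

def geometric_mean_field(zs):
    """Log-average over units (axis 0); decouples amplitude and phase averages."""
    return np.exp(np.mean(np.log(zs), axis=0))

dt, steps = 0.005, 20000                 # forward-Euler integration to t = 100
for _ in range(steps):
    Z = geometric_mean_field(z)          # one mean field per oscillator index
    z = z + dt * ((1.0 + 1j * omega - np.abs(z) ** 2) * z
                  + K * (z @ A.T + eps * Z @ A.T))

# inter-unit phase differences of the three oscillator pairs;
# values near 0 (mod 2*pi) would indicate synchronised units
dphi = np.angle(z[0] / z[1])
```

Varying K, eps and the signs in A produces qualitatively different degrees of inter-unit locking, which is the kind of initial-condition-dependent behaviour catalogued above.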
. given network 9 ( red ) and network 10 ( blue ) , single attractors are generated for dissimilar ( and ) , while three attractors are generated for similar ( ) . the dissimilar is changed to the similar at , and back to the dissimilar at . for the simulation , coupling parameters were set to and , and uniformly - distributed intrinsic frequencies were used . ] we now consider a hierarchical system that is composed of a large number of subsystems ( ) . when the system is sufficiently large ( ) , it produces homogeneous synchronization patterns , independent of initial conditions , unlike the two - unit system . however , like the two - unit system , we could control the synchronization of the large system simply by changing the complexity ( number of attractors ) of subsystems by adjusting ( fig .[ fig : control ] ) . for largely dissimilar , individual units have single attractors , and thus become easily synchronized ( ) . in contrast , as the similarity of increases , it becomes more probable that they will enter basins of different attractors . this circumstance leads to partial synchronization or desynchronization of units ( ) . network 9 was slightly more effective for synchronization , but slightly less effective for desynchronization than network 10 . this result is similar to the observation that in the two - unit system , network 10 had a higher probability than network 9 of generating the bb state , in which individual units behave independently despite the inter - unit interaction . to check the robustness of controllability , we examined how parameters affect the degree of synchronization . we obtained similar degrees of synchronization over large ranges of ( fig .
[fig : parameter ] ) . this demonstrates robust control of synchronization based on changes of the number of attractors , not based on changes of the attractor positions . we introduced a simple and effective way to control synchronization between elements in complex systems , in which hierarchically - organized heterogeneous elements have asymmetric and activity - dependent couplings . we formulated this idea by using stuart - landau oscillators . each subsystem consisted of three oscillators that interacted with each other positively or negatively . anti - symmetric couplings between three oscillators were essential for generating single and multiple dynamic attractors depending on the amplitudes of the oscillators . here the number of attractors could specify the complexity of subsystems . when the subsystems were connected through a mean field of their average activities , the degree of synchronization between subsystems was controllable by changing the complexity of subsystems . in particular , we considered two kinds of mean fields using arithmetic average and geometric average , and found that the geometric mean field , which weights large outliers less heavily , was effective for controlling synchronization . the controllable synchronization can be applied to understand the synchronization behavior of complex biological networks . interestingly , anti - symmetric couplings of three populations ( network 9 ) are realized in pancreatic islets , and the synchronization between pancreatic islets is an important requirement in glucose metabolism . our finding may help to understand how islet - cell networks are synchronized . this research was supported by basic science research funded by ministry of science , ict & future planning no . 2013r1a1a1006655 and by the max planck society , the korea ministry of education , science and technology , gyeongsangbuk - do and pohang city .
| the controllability of synchronization is an intriguing question in complex systems , in which hierarchically - organized heterogeneous elements have asymmetric and activity - dependent couplings . in this study , we introduce a simple and effective way to control synchronization in such a complex system by changing the complexity of subsystems . we consider three stuart - landau oscillators as a minimal subsystem for generating various levels of complexity , and hierarchically connect the subsystems through a mean field of their activities . depending on the coupling signs between three oscillators , subsystems can generate ample dynamics , in which the number of attractors specifies their complexity . the degree of synchronization between subsystems is then controllable by changing the complexity of subsystems . this controllable synchronization can be applied to understand the synchronization behavior of complex biological networks . _ keywords _ : synchronization , stuart - landau model , controllability , complexity
the present article proposes a theory that predicts the formation of habit plane normals very close to , observed in steels with low carbon content ( less than ) . in fact , _ all _ the predicted habit plane normals are almost exactly with and + , where denotes the euclidean distance . ] . widely accepted models that result in habit planes are double shear theories , e.g. , and some of the most accurate explanations are due to the algorithm developed by kelly . these can be seen as generalisations of the so - called phenomenological theory of martensite , most notably developed by wechsler , lieberman & read to explain the habit planes in plate martensite , and bowles & mackenzie , who applied their theory to explain the and habit planes also in plate martensite . a shortcoming of single / double shear theories is the lack of a selection mechanism that picks the right lattice invariant shearing systems ( see e.g. ( * ? ? ?* table 1 ) ) , in turn leading to a large number of input parameters . to overcome this , one approach is to only allow shearing systems that arise from mechanical twinning , cf . . indeed , in the context of single shear theories and made this assumption and proposed that the martensite plates consist of a _ `` stack of twin - related laths '' _ .
as pointed out in , tem investigations in showed that even though _ `` under the optical microscope there is little sign that this is the case , when such steels are examined with the transmission electron microscope , arrays of very thin twins are indeed found '' _ - marking a significant success from a theoretical prediction to an observed feature in plate martensite .regarding habit planes in lath martensite , it can be shown that for any reasonable choice of lattice parameters ( see also figure [ figsimplnormals ] ) , a single shear theory with shearing systems arising from twinning in bcc crystals can not give rise to them .however , in this paper we show that by introducing another level of twinning ( `` twins within twins '' ) we are not only able to explain habit planes but also predict them by showing that it is the only possible family of habit planes that satisfies a condition of maximal compatibility and a condition of small overall atomic movement . under this interpretationeach lath may be seen as a region of twins within twins . in other materials , twins within twinshave commonly been observed purely in martensite as well as along interfaces with austenite .moreover for lath martensite it has been observed in that _ `` twinning within a lath may be heavy . in any event , whenever an exact twin relationship was identified , it was found to be a result of twinning within a given lath and not of a twin relation between adjacent laths .it is believed that the existence of heavily twinned local regions of laths , which may appear as separate laths in contrast images , may have caused some misinterpretation in earlier work on lath martensite . 
'' _ as in single shear theories , twinning of twins is macroscopically equivalent to a simple shear of a simply twinned system ( cf . text below ) and thus the step from twins to twins within twins is in analogy with the step from a single to a double shear theory . the strength of the theory presented here is that it enables one to predict habit planes only assuming the lattice parameters of austenite and martensite . this is particularly striking when compared to the double shear theory in where the _ `` calculation strategy was to select one of the possible systems and then perform calculations for the systems over a range of values of . the sign of was selected by trial and error depending on whether the habit plane moved towards or away from . '' _ the theory proposed in this article is derived from the ball - james model ( based on nonlinear elasticity and energy minimisation ) and expands on previous work by ball & carstensen on the possibility of nonclassical austenite - martensite interfaces . even though lath martensite is commonly associated with a high dislocation density , slip and plasticity , the present model does not take such effects into account ( see also section [ secconcl ] ) . interestingly , the ball - james model recovers the results of the phenomenological theory of martensite , as can be seen through a comparison of the derived formulae for the habit planes between twinned martensite and austenite ( cf .* eq . ( 5.89 ) ) and ( * ? ? ?* eq . ( 33)-(34 ) ) ) . for a self - contained account of the phenomenological theory or of the ball - james model the reader is referred to the monographs and , respectively , both addressed to non - specialists .
in the ball - james model , which neglects interfacial energy , microstructures are identified through minimising sequences , , for a total free energy of the form here , is a region representing the reference configuration of undistorted austenite at the transformation temperature and denotes the deformed position of particle in . we remark that passing to the limit in these minimising sequences corresponds in a very precise way to passing from a micro- to a macroscale , so that the limits themselves can be identified with the macroscopic deformations . the energy density depends only on the deformation gradient , a 3×3 matrix with positive determinant , and the temperature . also is assumed frame indifferent , i.e. for all rotations , that is , for all 3×3 matrices in and must respect the symmetry of the austenite , i.e. for all rotations leaving the austenite lattice invariant . for cubic austenite there are precisely 24 such rotations . below the transformation temperature , is minimised on the set of martensitic energy wells , that is is minimal for . the positive - definite , symmetric matrices are the pure stretch components of the transformation strains mapping the parent to the product lattice . for example , in the case of fcc to bcc or fcc to bct , these are given by the three bain strains where and . here is the lattice parameter of the fcc austenite and , are the lattice parameters of the bct martensite ( for bcc ) . the notation , , has been chosen to emphasise that we are in the bain setting and to stay consistent with the literature . we remark that the bain transformation is widely accepted as the transformation from fcc to bct / bcc requiring the least atomic movement ; for a rigorous justification see . a convenient way to understand the relation between microstructures and minimising sequences is illustrated by the following example ( cf .
) . ( austenite - twinned martensite interface)[example1 ] + suppose that a region of martensite is occupied by an array of twin related variants and with relative volume fractions and . the strains and in general can not be invariant plane strains ( ips ) ; equivalently , they can not form a fully coherent interface with austenite , represented in this model by the identity matrix . however , for specific volume fractions ( given by for the bain strain ) , the average deformation strain of the twinned region may indeed become an ips . in terms of the nonlinear elasticity model , this inability to form a fully coherent interface at the microscopic level implies that the austenite - twinned martensite configuration can not exactly minimise the energy . nevertheless , one can construct an energy minimising sequence , , with gradients as in figure [ fig : laminate ] . the limit of this sequence is precisely the average strain / total shape deformation that is an ips . although this average strain does not minimise the energy , it can be interpreted as a minimiser of a corresponding macroscopic energy . at the microscopic level , one would observe some specific element of the above minimising sequence rather than the limit , due to having neglected interfacial energy . the set of all matrices that can arise as limits of minimising sequences for the energy is referred to as the quasiconvex hull of the martensitic energy wells , denoted by . hence , the set corresponds to all possible homogeneous total shape deformations that are energy minimising at the macroscopic level . then , the requirement that ( up to an overall rotation ) a martensitic microstructure with a total shape deformation is a strain leaving the plane with normal invariant , amounts to finding a vector such that here , denotes the 3×3 matrix . writing , yields the equivalent expression , implying that is the ips given by a shear on the plane with normal , with shearing direction and shearing magnitude .
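The invariant-plane property just described is easy to verify directly: for a deformation gradient of the form "identity plus rank-one", every vector in the plane with the given normal is left unchanged, and the determinant equals one plus the dot product of the shear vector with the normal, so the deformation is volume-preserving exactly when the two are perpendicular. A minimal sketch with illustrative vectors:

```python
import numpy as np

b = np.array([0.2, 0.1, 0.0])   # shear vector (direction times magnitude); illustrative
m = np.array([0.0, 0.0, 1.0])   # habit plane normal; illustrative
F = np.eye(3) + np.outer(b, m)  # invariant plane strain

v = np.array([3.0, -1.0, 0.0])  # any vector with m . v = 0
Fv = F @ v                      # equals v: the plane with normal m is invariant
detF = np.linalg.det(F)         # equals 1 + b . m (here 1: a volume-preserving simple shear)
```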
in particular , if the transformation from parent to product phase is volume - preserving , is a simple shear , corresponding to the vectors and being perpendicular . we note that equivalently says that can form a fully coherent planar interface with austenite of normal . also by frame - indifference , the austenite can be represented by any rotation . hence , a further rotation of the martensite results in , so that the rotated martensite can still form a fully coherent interface with austenite of the same normal . that is , rotations from the left can not change the habit plane normal . a common feature in both the phenomenological theory and the ball - james model is to construct ( up to an overall rotation ) a total shape deformation that is an ips . in the literature , various algorithms have been proposed for the calculation of the corresponding elements of the shear , i.e. the magnitude , direction and normal ( cf . ) . for example , see in the context of twinning / single shear theories and for double shear theories . in general , the problem of finding an overall rotation and shearing elements such that can be simplified by only considering the cauchy - green strain tensor and thus factoring out the overall rotation . the following proposition ( ( * ? ? ?* proposition 4 ) ) allows one to calculate the shearing elements and in terms of the principal stretches and stretch vectors of . the overall rotation can then be found by substituting and back into equation . [ propbj ] let be a symmetric matrix with ordered eigenvalues . then can be written as for some if and only if and . then , there are at most two solutions given by where is a normalisation constant , and are the ( normalised ) eigenvectors of corresponding to and . in the material science literature twins are often described as two phases related by a specific degree rotation or , equivalently , a reflection .
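Proposition [propbj] lends itself to a direct numerical sketch. The solver below implements the standard Ball-James formulas for the two solutions in terms of the smallest and largest eigenvalues and eigenvectors of the given symmetric matrix; the matrix used to exercise it is a synthetic invariant plane strain composed by hand, not data from the paper.

```python
import numpy as np

def ips_solutions(C):
    """Solve (1 + b (x) m)^T (1 + b (x) m) = C for the two pairs (b, m).

    Following Ball & James, the ordered eigenvalues of the symmetric
    matrix C must satisfy l1 <= l2 = 1 <= l3; one solution per sign kappa.
    """
    lam, e = np.linalg.eigh(C)                 # ascending eigenvalues
    l1, l2, l3 = lam
    if abs(l2 - 1.0) > 1e-8:
        raise ValueError("middle eigenvalue must equal one")
    sols = []
    for kappa in (1.0, -1.0):
        b = (np.sqrt(l3 * (1 - l1) / (l3 - l1)) * e[:, 0]
             + kappa * np.sqrt(l1 * (l3 - 1) / (l3 - l1)) * e[:, 2])
        m = ((np.sqrt(l3) - np.sqrt(l1)) / np.sqrt(l3 - l1)
             * (-np.sqrt(1 - l1) * e[:, 0] + kappa * np.sqrt(l3 - 1) * e[:, 2]))
        sols.append((b, m))
    return sols

# synthetic check: build an invariant plane strain by hand and recover it
b0, m0 = np.array([0.3, 0.0, 0.1]), np.array([0.0, 0.0, 1.0])
F0 = np.eye(3) + np.outer(b0, m0)
C = F0.T @ F0                                  # its middle eigenvalue is one
solutions = ips_solutions(C)
```

Both returned pairs reproduce the same Cauchy-Green tensor, which is exactly the "at most two solutions" statement of the proposition.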
in the mathematical literature a twin is usually characterised by the existence of a rank - one connection between the two deformation strains , corresponding to the two phases , i.e. the existence of vectors and such that a fully coherent interface between the two phases is then given by the plane of normal .this is because for any vector on that plane , i.e. , we obtain .also , note that so that the lattice on the one side of the interface can be obtained by shearing the lattice on the other side along the twin plane , in the shearing direction with shearing magnitude .the latter expression enables one to calculate the vectors and by proposition [ propbj ] through the identification , that is the relative deformation between the two phases is an ips . in view of single shear theories, the above expression can equivalently be written as and thus can be obtained as a shear of the parent lattice , followed by .hence , in the case of twins between two martensitic energy wells and one needs to solve the equation for the rotation matrix and the twinning elements and .if the transformation strains and are related by a rotation , this calculation simplifies significantly by mallard s law ( see ( * ? ? ?* result 5.2 ) or below ) .in particular , this assumption holds for and , i.e. for the bain transformation from fcc to bct / bcc .( mallard s law)[lemmamal ] + let and satisfy for some rotation about a unit vector , i.e. . then the equation admits two solutions given by in each case , .we conclude this interlude by remarking that twins described by the first solution in mallard s law are _ type i _ twins and the corresponding lattices are related by a 180 rotation about the twin plane .the second solution in mallard s law describes _ type ii _twins and the lattices are related by a 180 rotation about the shearing direction .it may happen , and it does for the bain strains , that there are two rotations by 180 relating and . 
in this case , there are seemingly four solutions from mallard s law ; however , proposition [ propbj ] says that there can not be more than two . indeed , the type i solution using one 180 rotation is the same as the type ii solution using the other 180 rotation and vice versa . in particular , the lattices on either side of the interface are related by both a 180 rotation about the twin plane and a 180 rotation about the shearing direction . solutions of this type are _ compound _ twins . henceforth , we only consider the bain strains , and for the fcc to bct / bcc transformation in steel . in single shear theories the total shape deformation is assumed to be decomposable into where is a rotation , is one of the bain strains and is a shear whose specific form varies in the literature . in the ball - james theory , the total shape deformation must be macroscopically energy minimising , thus restricting the form of the shear . the most important case is when arises from twinning . as in example [ example1 ] , the average strain corresponding to a twinning system between and , satisfying , with volume fractions and , respectively , is given by , for any , where the elements , and can be calculated by mallard s law ( cf . proposition [ lemmamal ] ) applied to with either or , i.e. the resulting twins are compound . by simple algebraic manipulation , the average strain can be written as i.e. a single shear , with , of the parent lattice followed by the bain strain . by proposition [ propbj ] , to make the total shape deformation an ips , the volume fraction needs to be chosen such that the middle eigenvalue of is equal to one . in particular , since one of the eigenvalues must be made equal to one , the expression must vanish , giving rise to the two solutions and , where it is important to then check that it is indeed the middle eigenvalue that is equal to one .
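The selection of the volume fraction just described can be carried out numerically end to end: compute the twin elements from the rank-one equation, root-find the volume fraction that makes the middle eigenvalue of the averaged strain equal to one, and read off a habit plane normal. The sketch below uses the standard Ball-James solution formulas together with a volume-preserving Bain strain whose stretch is an illustrative value roughly appropriate for iron; the parameter values and conventions are assumptions, not taken from the paper.

```python
import numpy as np

def ips_solutions(C):
    """Two (b, m) with (1 + b (x) m)^T (1 + b (x) m) = C; needs middle eigenvalue 1."""
    lam, e = np.linalg.eigh(C)                    # ascending eigenvalues
    l1, l3 = lam[0], lam[2]
    assert abs(lam[1] - 1.0) < 1e-8
    sols = []
    for kappa in (1.0, -1.0):
        b = (np.sqrt(l3 * (1 - l1) / (l3 - l1)) * e[:, 0]
             + kappa * np.sqrt(l1 * (l3 - 1) / (l3 - l1)) * e[:, 2])
        m = ((np.sqrt(l3) - np.sqrt(l1)) / np.sqrt(l3 - l1)
             * (-np.sqrt(1 - l1) * e[:, 0] + kappa * np.sqrt(l3 - 1) * e[:, 2]))
        sols.append((b, m))
    return sols

eta1 = 1.13                                       # illustrative stretch, roughly iron
eta2 = 1.0 / eta1**2                              # volume-preserving Bain strain
B1 = np.diag([eta2, eta1, eta1])
B2 = np.diag([eta1, eta2, eta1])

# twin between B1 and B2: since B1 + a (x) n = (1 + a (x) n') B1 with n = B1 n',
# the rank-one equation reduces to C_twin = B1^{-1} B2^2 B1^{-1}
C_twin = np.linalg.inv(B1) @ B2 @ B2 @ np.linalg.inv(B1)
a, n_prime = ips_solutions(C_twin)[0]
n = B1 @ n_prime                                  # twin plane normal (a {110}-type plane)

def middle_eig_minus_one(lmb):
    F = B1 + lmb * np.outer(a, n)                 # average strain of the twinned band
    return np.sort(np.linalg.eigvalsh(F.T @ F))[1] - 1.0

lo, hi = 1e-6, 0.5                                # bisection: f(lo) > 0 > f(hi)
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if middle_eig_minus_one(mid) > 0.0:
        lo = mid
    else:
        hi = mid
lam_star = 0.5 * (lo + hi)                        # volume fraction making the average an IPS

F = B1 + lam_star * np.outer(a, n)
b_h, m_h = ips_solutions(F.T @ F)[0]              # m_h: a habit plane normal (unnormalised)
```

The companion root at one minus this volume fraction gives the second solution mentioned above, and checking the middle eigenvalue after the root-find is exactly the verification step the text insists on.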
for each of the values , , we can calculate two habit plane normals according to proposition [ propbj ] . one of these normals is , up to normalisation , given by , where by considering the remaining normals and all possible pairs of twin related bain strains , one recovers the entire family of normals . in figure [ figsimplnormals ] , the components of one of these normals are plotted for a typical range of lattice parameters , in the fcc to bct / bcc transformation . we immediately note that for and the predicted habit plane normal arising from simple twinning is almost exactly and for and the habit plane normal is almost exactly . the corresponding ratios of tetragonality are given by and respectively . this is in excellent agreement with the observations in e.g. of habit planes in steel with carbon content in the range wt- as well as the theoretical and experimental results in e.g. , and of habit planes in highly tetragonal martensite . similarly , in double shear theories the total shape deformation is assumed to be decomposable into where is a rotation , is one of the bain strains and , are two shears . above we have seen how twinning can be regarded as an instance of a single shear theory . in this section we show how an additional level of twinning , i.e. twins within twins , results in an instance of a double shear theory , consistent with the ball - james model . to visualise this type of microstructure , we revisit the construction in example [ example1 ] to construct _ two _ twinning systems , one with , and average strain and another one with , say , and average strain . for each there exists a rotation such that and thus the two twinning systems are macroscopically compatible with a fully coherent interface of normal between them ( cf . figure [ figdouble ] ) . the elements , and can be calculated by mallard s law ( cf . proposition [ lemmamal ] ) applied to with , giving rise to a type i and a type ii solution .
unlike in the single shear theory , these twins are not compound and therefore we distinguish these two solutions by the superscript , for type i and type ii respectively . finally , it can be shown that the volume fractions in each of the twinned regions must necessarily coincide . as in example [ example1 ] , we can construct an array of twins between the two twinned regions with respective volume fractions and and average strain given by . simple algebraic manipulation allows one to write as where and , and thus an instance of a double shear theory . we note that . by proposition [ propbj ] , in order to make the total shape deformation an ips , the volume fractions , need to be chosen such that the middle eigenvalue of is equal to one . solving for each fixed , gives rise to a quadratic equation in for each choice of , which can be solved explicitly ( see for the full details ) . the expressions are lengthy and we refer to figure [ figdoublelam ] , which visualises the dependence for the volume - preserving bain strain . the endpoints of these curves correspond to the vanishing of one of the twinning systems ( cf . ) and hence to the collapse of the system of twins within twins to a simple twinning system with volume fractions given by . the figure remains qualitatively the same for typical lattice parameters . of , of and of that make the twins within twins an ips for and . ] for each and each admissible pair with corresponding strain , we can calculate two one - parameter families of habit plane normals through proposition [ propbj ] .
by considering all possible combinations of twinning systems in our construction , we then obtain all crystallographically equivalent normals . for the volume - preserving bain strain , the habit plane normals that can arise from the family are visualised in figure [ figfirstsolline ] . however , due to algebraic complexity , it is difficult to write down a formula for the habit plane normals with an explicit dependence on , and . it is natural to assume that the observed total shape deformation requires small overall atomic movement ( see also ) relative to the parent phase of austenite . a measure of this distance is the _ strain energy _ given by where denotes the frobenius norm and the principal stretches of . it can be shown that any microstructure with small strain energy must necessarily involve all three bain variants in roughly similar volume fractions . in particular , this can not be the case for an array of twin related variants and we ought to consider at least twins within twins ( see also fig . [ figdis ] ) . although introducing even further levels of twinning can reduce the strain energy , one could argue that interfacial energy contributions , which are not accounted for in this model , may inhibit such behaviour . combining the fact that can not result from simple twinning ( cf . [ figsimplnormals ] ) and that twins within twins are preferable in terms of strain energy , we build a theory that predicts habit plane normals solely based on energy minimisation and geometric compatibility . firstly , the one - parameter families of habit plane normals obtained from twins within twins ( see section [ secdoubleshear ] ) contain normals very close to any . this is at least the case for lattice parameters close to , corresponding to a volume - preserving transformation from fcc to bcc . this regime of parameters is suitable since habit planes are observed in low - carbon steels where the transformation is very nearly fcc to
bcc . the resulting one - parameter families of habit plane normals , along with their crystallographically equivalent ones , are shown in figure [ fignorm ] . we stress that the only free parameter in the generation of these normals is which fixes the choice of the shearing systems , based only on the energy minimising property of the microstructure . secondly , out of these one - parameter families of normals , our theory can identify the habit plane normals as those satisfying a criterion of maximal compatibility . to this end , revisiting our construction of twins within twins there is a choice ( cf . ) of using either as an average strain , corresponding to the type i solution from mallard s law , or corresponding to type ii . figure [ figdis ] shows the strain energy associated with the two macroscopic strains as a function of . it is clear that the strain energy of is significantly smaller than that of and is thus preferable . we also note that , in agreement with the previous section , the strain energies of both and increase rapidly as the volume fraction of approaches or , that is as the microstructure reduces to a single twinning system . further , we remark that the with minimal strain energy result in habit plane normals which are very nearly ( see also fig . [ fignorm ] ) . nevertheless , the strain energies of any of the that give rise to normals are lower ( cf . [ figdis ] ) . even though the strains resulting in habit plane normals do not minimise the strain energy , they satisfy a strong criterion of compatibility . to understand this one must think in terms of the dynamic process of nucleation . as austenite is rapidly quenched , the martensite phase nucleates at various sites . the strain in a given nucleation site may need to be an ips but otherwise has no reason to be the same as the strain in any other site .
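The strain energy invoked in this argument can be realised, for instance, as the sum of squared deviations of the principal stretches from unity; this particular form is an assumed, common choice, but it already illustrates why a single Bain variant, with two stretched axes and one compressed one, sits far from the undistorted austenite:

```python
import numpy as np

def strain_energy(F):
    # squared deviation of the principal stretches (singular values of F) from 1;
    # an assumed realisation of the distance measure, not the paper's exact formula
    s = np.linalg.svd(F, compute_uv=False)
    return float(np.sum((s - 1.0) ** 2))

eta1 = 1.13                  # illustrative lattice stretch, roughly iron
eta2 = 1.0 / eta1**2         # volume-preserving Bain strain
B1 = np.diag([eta2, eta1, eta1])

e_identity = strain_energy(np.eye(3))  # zero: no atomic movement at all
e_single = strain_energy(B1)           # a single Bain variant is far from the identity
```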
nevertheless , there are essentially only three distinct families of systems of twins within twins , and these can be classified by the bain variant which is present in both of the simple twinning systems that comprise the overall microstructure . in figure [ fignorm ] , these three families are distinguished by colour . as the nuclei grow and approach other nuclei , they need to remain compatible with each other . remarkably , the only habit plane normals that arise from deformations with low strain energy and can be reached by all three families are . in figure [ fignorm ] , this can be seen from the fact that all differently coloured curves intersect close to . for any two such regions of twins within twins with corresponding average strains and , one can see that , implying that they can meet along a fully coherent planar interface of normal . of course , any nucleus interacts with its neighbours faster than it does with distant nuclei . as a result , blocks of similarly oriented regions of twins within twins ( laths ) may form whose overall orientation may differ from that of other blocks . the theory proposed here for the prediction of habit plane normals has two possible interpretations . on the one hand , it can be seen as a purely macroscopic theory . in particular , it is an instance of a double shear theory with a precise algorithm to produce the required shears based on energy minimisation , without the need for any further assumptions . all possible habit plane normals that can arise from the one - parameter family ( indexed by ) of macroscopic deformations are shown in figure [ figfirstsolline ] . table [ tableshear ] then lists the elements of the twinning systems for the values of that produce a near habit plane . with the help of , it is easy to convert between the twinning and shearing systems and thus compute the elements and required in a double shear theory .
at this macroscopic level it is not possible to distinguish between twins within twins , a single twin and one slip system , and a single variant and two slip systems . on the other hand , a physical mechanism for the formation of habit plane normals is proposed and thus a specific morphology on a microscopic level . according to this interpretation , each lath may itself be a region of twins within twins with a corresponding lath boundary of normal . this type of morphology is depicted in figure [ figdouble ] with being a normal and the other elements as in table [ tableshear ] . this morphology is a direct consequence of the underlying theory and it would be very interesting if it could be put to experimental scrutiny . [ table [ tableshear ] : elements of the twinning systems ; entries lost in extraction . ] research of a. m. leading to these results has received funding from the european research council under the european union's seventh framework programme ( fp7/2007 - 2013 ) / erc grant agreement n . | a mathematical framework is proposed to predict the features of the ( 557 ) lath transformation in low - carbon steels based on energy minimisation . this theory generates a one - parameter family of possible habit planes and a selection mechanism then identifies the ( 557 ) normals as those arising from a deformation with small atomic movement and maximal compatibility . while the calculations bear some resemblance to those of double shear theories , the assumptions and conclusions are different . interestingly , the predicted microstructure morphology resembles that of plate martensite , in the sense that a type of twinning mechanism is involved . msc ( 2010 ) : 74a50 , 74n05 , 74n15 keywords : lath martensite , microstructure , twins within twins , ( 557 ) habit planes , double shear theories , energy minimisation , non - classical interfaces |
s. bertone , c. le poncin - lafitte , v. lainey and m .- c . angonin syrte - obs . de paris - cnrs / umr8630 , upmc , france + inaf - astronomical observatory of turin , university of turin , italy + imcce - obs . de paris - cnrs / umr8028 , upmc , france e - mail : stefano.bertone.fr deep space data processing during the last decade has revealed the presence of anomalies in the form of unexpected accelerations in the trajectory of probes . the hypotheses put forward to solve this puzzle can be summarized in two main approaches : either these anomalies are the manifestation of some new physics , or something is mismodeled in the data processing . + we investigate moyer's book , which describes the relativistic framework used by space agencies for data processing . we know that the ephemeris of a space mission is built from subsequent measures involving the light time of a signal traveling between the earth and the probe and the solution of the inverse problem . since the ephemeris is used for both operational ( space probe navigation ) and scientific goals ( measurements for testing fundamental physics ) , a well defined model is then mandatory for both the interpretation of physical data and the orbit reconstruction . in this article , we suggest an improvement of the light time modeling , focusing on the treatment of the so - called `` transponder delay '' . + this paper is structured as follows . + in section 2 , we give a brief overview of light time computation as described in moyer's book ; we show that the transponder's delay ( i.e. the time delay between the reception and retransmission of the light signal on board the satellite ) is not accurately taken into account in this model . in section 3 we present an alternative , more precise , modeling .
finally , in section 4 , we compare both modelings to highlight their differences and give some conclusions in section 5 . + throughout this work we will suppose that space - time is covered by some global barycentric coordinate system , with , being the speed of light in vacuum , a time coordinate and . greek indices run from 0 to 3 , and latin indices from 1 to 3 . here / represents the position / velocity of body at time , where can take the value ( ground station ) or ( spacecraft ) . primed values are related to the _moyer's modeling_ , while we will generally use non - primed values for our proposed modeling . deep space navigation is based on the exchange of light signals between a probe and at least one observing ground station . the calculation of a coordinate light time , as summarized from , is quite simple : a clock starts counting as an uplink signal is emitted from the ground at . the signal is received by the probe at and then , after a short delay , reemitted towards the earth , where it is received by a ground station at . the clock stops counting and gives the round - trip light time , where is the speed of light , is the shapiro delay , while and are the transponder delay and other corrections ( e.g. atmospheric delay ...
) that we will not detail here , respectively . the light time is then used to compute two physical quantities : * the _ranging_ , related to the distance between the probe and the ground station , can be computed using ; * the _doppler_ , related to the velocity of the probe with respect to the earth , is obtained by differentiating two successive light time measurements , and , during a given count interval . it has been shown that , where is a transponder ratio applied to the downlink signal when it is reemitted towards the earth and . + since the _doppler_ signal results from the differentiation of the _ranging_ signal , all constant or slowly changing terms like and obviously cancel out in this modeling . nevertheless , the electronic delay of some microseconds due to on - board processing of the incoming signal requires considering a different position of the spacecraft at reemission time . in the following , we study its consequences on light time modeling for _ranging_ and _doppler_ calculations . + for this purpose , we introduce an improved light time model taking into account four events ( one more with respect to moyer's model ) : the emission from the ground station at , the reception by the probe at , the reemission at and the reception at ground at . the additional event accounts for this small delay of ( at least for modern spacecraft ) so that we get . similarly to section [ sec : moyermodel ] , we then use to compute _ranging_ and _doppler_ observables as . to compare the two modelings presented in sections [ sec : moyermodel ] and [ sec : ourmodel ] , we shall define the difference between the computed light times $\Delta T = T - T' = t_1 - t'_1$ , and . let us then analyze the supplementary event . this term is implicitly related to by the first - order development $\mathbf{x}^{sc}_3 = \mathbf{x}^{sc}_2 + \Delta t \, \dot{\mathbf{x}}^{sc}_2 + O ( \Delta t^2 )$ . the implications of this mismodeling are given by , where we used eq . , eq . and eq . into eq .
and defining as the minkowskian direction between the ground station and the probe .+ equation highlights the presence of an extra non - constant term , directly proportional to the transponder delay and neglected in moyer s model .this term also depends on the position and velocity of both the probe and the ground station .neglecting it would actually lead to a wrong determination of the epoch and to an error in both _ ranging _ and _doppler_. in order to evaluate the magnitude of the additional term in eq ., we computed ( giving the difference between the _ ranging _ calculated with the two models ) and ( related to the difference of the _ doppler _ calculated by the two models ) for the observation of a probe .we used the real orbit of some probes ( rosetta , near , cassini , galileo ) during their earth flyby , which is a particularly favorable configuration .we used the naif / spice toolkit to retrieve the ephemeris for probes and planets to be used in the computation .computing eq . 
and its time derivative for the near probe during its earth flyby on 23 january 1998 , we found a difference of the order of some for the probe distance calculated by the two models and a difference up to several at the instant of maximum approach for its velocity . these results are shown in figures [ fig : range ] and [ fig : dop ] . + in order to highlight the high variability of the transponder delay effect on doppler measurements , we computed for different probes in different configurations with respect to the observing station . the results are shown in figure [ fig : dop1 ] and demonstrate that this delay can not be simply calibrated at the level of light time calculation nor neglected in the _doppler_ calculation . it seems obvious from our results that the influence of the transponder delay can not be reduced to a simple calibration without taking some precautions . it is indeed responsible for a tiny effect on the computation of light time and has an impact on both _ranging_ and _doppler_ determination . we represent it by a more complete modeling , considering four epochs instead of three . in order to test the amplitude and variability of this effect on real data , we compute its influence on some real probe - ground station configurations during recent earth flybys ( near , rosetta , cassini and galileo ) . + the observables calculated using moyer's model and our improved model show differences of the order of several and of for the _ranging_ and the _doppler_ , respectively . such an error is acceptable for most operational goals at present . nevertheless , we shall highlight that this error is directly proportional to the transponder delay and that for past missions , whose data are still largely used for scientific purposes , transponders were more than times slower than today .
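The effect being measured can be sketched numerically. The code below is a hypothetical Newtonian toy model (no Shapiro or atmospheric terms, straight-line geometry, invented numbers): it compares a round trip in which the spacecraft re-emits from its reception point against one in which it drifts for the transponder delay before re-emitting, which is the essential difference between the three-event and four-event models.

```python
import numpy as np

C = 299_792_458.0   # speed of light, m/s
DT = 1e-6           # assumed transponder delay, s (order of modern hardware)

def one_way(r_a, r_b):
    # Newtonian one-leg light time; relativistic and atmospheric
    # corrections are omitted in this sketch.
    return np.linalg.norm(r_b - r_a) / C

def round_trip(r_station, r_sc, v_sc, move_during_delay):
    # If move_during_delay is True, the spacecraft is advanced by
    # v_sc * DT between reception and re-emission (four-event model);
    # otherwise it re-emits from the reception point (three-event
    # model plus a constant delay).
    r_emit = r_sc + v_sc * DT if move_during_delay else r_sc
    return one_way(r_station, r_sc) + DT + one_way(r_emit, r_station)

# hypothetical geometry: probe 10^7 m away, receding at 10 km/s
r_st = np.zeros(3)
r_sc = np.array([1.0e7, 0.0, 0.0])
v_sc = np.array([1.0e4, 0.0, 0.0])

dT = round_trip(r_st, r_sc, v_sc, True) - round_trip(r_st, r_sc, v_sc, False)
print(dT * C)   # extra downlink path length; here v.n * DT = 1 cm
```

The extra term scales with the radial velocity of the probe, which is why it varies strongly along a flyby and cannot be absorbed into a constant calibration.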
in the future too , increasing ephemeris precision should be accompanied by the development of faster transponders or by the use of a more precise model . _acknowledgements._ the authors are grateful to the anonymous referees for their detailed review , which allowed us to improve the paper . s. bertone and c. le poncin - lafitte are grateful for the financial support of cnrs / gram . j. d. anderson , j. k. campbell , j. e. ekelund , j. ellis , and j. f. jordan . _anomalous orbital - energy changes observed during spacecraft flybys of earth ._ physical review letters , 100(9):091102 , march 2008 . j. d. anderson , p. a. laing , e. l. lau , a. s. liu , m. m. nieto , and s. g. turyshev . _indication , from pioneer 10/11 , galileo , and ulysses data , of an apparent anomalous , weak , long - range acceleration ._ physical review letters , 81:2858 - 2861 , october 1998 . l. iess , m. di benedetto , n. james , m. mercolino , l. simone , and p. tortora . _astra : interdisciplinary study on enhancement of the end - to - end accuracy for spacecraft tracking techniques ._ acta astronautica , volume 94 , issue 2 , p. 699 - 707 | during the last decade , the precision in the tracking of spacecraft has constantly improved . the discovery of a few astrometric anomalies , such as the pioneer and earth flyby anomalies , stimulated further analysis of the operative modeling currently adopted in deep space navigation ( dsn ) . our study shows that some traditional approximations lead to neglecting tiny terms that could have consequences in the orbit determination of a probe in specific configurations such as during an earth flyby . therefore , we suggest here a way to improve the light time calculation used for probe tracking . |
in this paper , we investigate numerically the behavior of granular material at the surface of an asteroid during close approach to the earth . we focus on the specific case of asteroid ( 99942 ) apophis , which will come as close as earth radii on april 13th , 2029 . we provide predictions about possible reshaping and spin - alteration of , and surface effects on , apophis during this passage , as a function of plausible properties of the constituent granular material . studies of possible future space missions to apophis are underway , including one by the french space agency cnes calling for international partners ( e.g. , michel et al . 2012 ) , with the aim of observing this asteroid during the 2029 close encounter and characterizing whether reshaping , spin - alteration , and/or surface motion occur . the numerical investigations presented here allow for estimation of the surface properties that could lead to any observed motion ( or absence of motion ) during the actual encounter . apophis made a passage to the earth at earth radii in early 2013 . at that time , the herschel space telescope was used to refine the determination of the asteroid's albedo and size ( müller et al . ) . according to these observations , the albedo is estimated to be about 0.23 and the longest dimension about m , which is somewhat larger than previous estimates ( m , according to delbo et al . ) . concurrent radar observations improved the astrometry of the asteroid , ruling out the possibility of a collision with the earth in 2036 to better than 1 part in . however , wlodarczyk et al . ( 2013 ) presented a possible path of risk for 2068 . this finding has put off any crisis by years and makes exploring apophis in 2029 more a matter of scientific interest .
to date , nothing is known about the asteroid s surface mechanical properties , and this is why its close passage in 2029 offers a great opportunity to visit it with a spacecraft , determine its surface properties , and , for the first time , observe potential modifications of the surface due to tidal effects . and as apophis approaches , it is likely that international interest in a possible mission will increase , since such close approaches of a large object are relatively rare .the case for tidally induced resurfacing was made by binzel et al .( 2010 ; also see demeo et al .2013 ) and discussed by nesvorn et al .( 2010 ) to explain the spectral properties of near - earth asteroids ( neas ) belonging to the q taxonomic type , which appear to have fresh ( unweathered ) surface colors .dynamical studies of these objects found that those bodies had a greater tendency to come close to the earth , within the earth - moon distance , than bodies of other classes in the past kyr .the authors speculated that tidal effects during these passages could be at the origin of surface material disturbance leading to the renewed exposure of unweathered material .we leave a more general and detailed investigation of this issue for future work , but if this result is true for those asteroids , it may also be true for apophis , which will approach earth on friday , april 13 , 2029 no closer than about 29,500 km from the surface ( i.e. 
, earth radii , or earth radii from the center of the planet ; giorgini et al .it is predicted to go over the mid - atlantic , appearing to the naked eye as a moderately bright point of light moving rapidly across the sky .our aim is to determine whether , depending on assumed mechanical properties , it could experience surface particulate motions , reshaping , or spin - state alteration due to tidal forces caused by earth s gravity field .the classical roche limit for a cohesionless fluid body of bulk density to not be disrupted by tidal forces is earth radii , so we do not expect any violent events to occur during the rocky asteroid s 2029 encounter at earth radii . the presence of granular material ( or regolith ) and boulders at the surface of small bodies has been demonstrated by space missions that visited or flew by asteroids in the last few decades ( e.g. , veverka et al .2000 ; fujiwara et al .it appears that all encountered asteroids to date , from the largest one , the main belt asteroid ( 4 ) vesta by the dawn mission , to the smallest one , the nea ( 25143 ) itokawa , sampled by the hayabusa mission , are covered with some sort of regolith .in fact , thermal infrared observations support the idea that most asteroids are covered with regolith , given their preferentially low thermal inertia ( delbo et al .there even seems to be a trend as a function of the asteroid s size based on thermal inertia measurements : larger objects are expected to have a surface covered by a layer of fine regolith , while smaller ones are expected to have a surface covered by a layer of coarse regolith ( clark et al .this trend is consistent with observations by the near - shoemaker spacecraft of the larger ( km mean diameter ) eros , whose surface is covered by a deep layer of very fine grains , and by the hayabusa spacecraft of the much smaller ( m mean diameter ) itokawa , whose surface is covered by a thin layer of coarse grains .however , interpretation of thermal 
inertia measurements must be made with caution , as we do not yet have enough comparisons with actual asteroid surfaces to verify that the suggested trend is systematically correct .thus , we are left with a large parameter space to investigate possible surface motion during an earth close approach of an asteroid with unknown surface mechanical properties .our approach is to consider a range of simple and well - controlled cases that certainly do not cover all possibilities regarding apophis surface mechanical properties , but rather aim at demonstrating whether , even in a simple and possibly favorable case for surface motion , some resurfacing event can be expected to occur during the passage .for instance , instead of considering a flat granular surface , we consider a sandpile consisting of a size distribution of spherical grains ( section [ s : initialcondition ] ) and vary the grain properties in order to include more or less favorable cases for motion ( from a fluid - like case to a case involving rough particles ) .slight disturbances may manifest as very - small - scale avalanches in which grain connections readjust slightly , for example .the forces acting on the sandpile are obtained by measuring all `` external '' perturbations during the encounter , including body spin magnitude and orientation changes , for cases in which the global shape remains nearly fixed , and again assuming simple and favorable configurations of the asteroid .indirectly , the encounter may also lead to internal reconfigurations of the asteroid , which in turn produce seismic vibrations that could propagate to the surface and affect the regolith material .these secondary modifications are not modeled here , although it may be possible in future work to account for this by shaking the surface in a prescribed manner . 
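The classical fluid Roche limit quoted earlier is easy to evaluate; the sketch below uses the standard d ≈ 2.455 R_p (ρ_p/ρ_b)^(1/3) formula with an assumed asteroid bulk density (the paper's exact numbers are elided in this extraction).

```python
def roche_limit_fluid(planet_radius, planet_density, body_density):
    # Classical Roche limit for a cohesionless fluid satellite,
    # d = 2.455 * R_p * (rho_p / rho_b)^(1/3), measured from the
    # planet's center.
    return 2.455 * planet_radius * (planet_density / body_density) ** (1.0 / 3.0)

R_EARTH = 6.371e6     # m
RHO_EARTH = 5514.0    # Earth's bulk density, kg/m^3
RHO_AST = 2000.0      # assumed asteroid bulk density, kg/m^3

d = roche_limit_fluid(R_EARTH, RHO_EARTH, RHO_AST)
print(d / R_EARTH)    # ~3.4 Earth radii for these assumed densities
```

Since the quoted 2029 encounter distance of about 29,500 km above the surface (roughly 5.6 Earth radii from the planet's center) lies well outside this limit for plausible densities, only subtle, not catastrophic, tidal effects are expected, consistent with the text.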
in any case , for this particular encounter , we demonstrate ( section [ s : reshpeff ] ) that any global reconfiguration will likely be small to negligible in magnitude . in the following , we first present , in section [ s : method ] , the numerical method used to perform our investigation , including the initial conditions of the sandpile adopted to investigate surface motion , the representation of the encounter , and the mechanical environment .results are described in section [ s : results ] , including potential reshaping of the asteroid , tidal disturbances for apophis encounter in 2029 , which is a function of the sandpile properties , spin orientation changes , and the dependency of the location of the sandpile on the asteroid to the outcome of the encounter .we also show the responses of the sandpiles for artificially close approaches ( and earth radii ) to demonstrate that our method does predict significant alteration of the sandpiles when this is certainly expected to happen .the investigation is discussed in section [ s : discuss ] and conclusions are presented in section [ s : concl ] .we use ` pkdgrav ` , a parallel -body gravity tree code ( stadel 2001 ) adapted for particle collisions ( richardson et al . 
2000 ; 2009 ; 2011 ) .originally collisions in ` pkdgrav ` were treated as idealized single - point - of - contact impacts between rigid spheres .a soft - sphere option was added recently ( schwartz et al .2012 ) ; with this new functionality , particle contacts last many timesteps , with reaction forces dependent on the degree of overlap ( a proxy for surface deformation ) and contact history this is appropriate for dense and/or near - static granular systems with multiple persistent contacts per particle .the code uses a 2nd - order leapfrog integrator , with accelerations due to gravity and contact forces recomputed each step .various types of user - definable confining walls are available that can be combined to provide complex boundary conditions for the simulations .the code also includes an optional variable gravity field based on a user - specified set of rules .the spring / dash - pot model used in ` pkdgrav ` s soft - sphere implementation is described fully in schwartz et al .briefly , a ( spherical ) particle overlapping with a neighbor or confining wall feels a reaction force in the normal and tangential directions determined by spring constants ( , ) , with optional damping and effects that impose static , rolling , and/or twisting friction .the damping parameters ( , ) are related to the conventional normal and tangential coefficients of restitution used in hard - sphere implementations , and .the static , rolling , and twisting friction components are parameterized by dimensionless coefficients , , and , respectively .plausible values for these parameters are obtained through comparison with laboratory experiments ( also see section [ s : initialcondition ] ) .careful consideration of the soft - sphere parameters is needed to ensure internal consistency , particularly with the choice of , , and timestep a separate code is provided to assist the user with configuring these parameters correctly .the numerical approach has been validated through 
comparison with laboratory experiments ; e.g. , schwartz et al . ( 2012 ) demonstrated that ` pkdgrav ` correctly reproduces experiments of granular flow through cylindrical hoppers , specifically the flow rate as a function of aperture size , and found the material properties of the grains also affect the flow rate .also simulated successfully with the soft - sphere code in ` pkdgrav ` were laboratory impact experiments into sintered glass beads ( schwartz et al .2013 ) , and regolith , in support of asteroid sampling mechanism design ( schwartz et al .2014 ) .we use a two - stage approach to model the effect of a tidal encounter on asteroid surface particles ( regolith ) .first , a rigid ( non - deformable ) object approximating the size , shape , and rotation state of the asteroid is sent past the earth on a fixed trajectory ( in the present study , the trajectory is that expected of ( 99942 ) apophis see section [ s : representme ] ; note that the actual shape of apophis is poorly known beyond an estimate of axis ratios , so we assume an idealized ellipsoid for this study ) . 
all forces acting at a target point designated on the object surface are recorded ( section [ s : representme ] ) .then , a second simulation is performed in the local frame of the target point , allowing the recorded external forces to affect the motion of particles arranged on the surface ( in the present study we consider equilibrated sandpiles ) .this two - stage approach is necessary due to the large difference in scale between the asteroid as a whole and the tiny regolith particles whose reactive motion we are attempting to observe .we approximate the regolith , which in reality likely consists of a mixture of powders and gravel ( clark et al .2002 ) , by a size distribution of solid spheres .we mimic the properties of different materials by adjusting the soft - sphere parameters ( section [ s : initialcondition ] ) .the soft - sphere approach permits simulation of the behavior of granular materials in the near - static regime , appropriate for the present case in which the regolith particles generally remain stationary for hours and suffer a rapid disturbance only during the moments of closest approach to the planet .in particular , the model permits a detailed look at the responses of local individual particles even when the tidal effects are too weak to cause any macroscopic surface or shape changes ; this gives insight into the limit of tidal resurfacing effects . in order to assess the effect of tidal encounters on a small surface feature, we carried out numerical simulations in a local frame consisting of a flat horizontal surface ( i.e. , the local plane tangential to the asteroid surface at the target point ) with a `` sandpile '' resting on top .the sandpile consists of = 1683 simulated spherical particles with radii drawn from a power - law distribution of mean cm and cm width truncated ( minimum and maximum particle radii and cm , respectively ) . 
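The truncated power-law size distribution described above can be sampled by inverse-CDF. The exponent below is an assumption (the extracted text does not preserve it), and the bounds are illustrative stand-ins for the elided values.

```python
import numpy as np

def sample_radii(n, r_min, r_max, alpha=-3.0, seed=0):
    # Draw n radii from dN/dr proportional to r**alpha, truncated to
    # [r_min, r_max], by inverting the cumulative distribution.
    # Requires alpha != -1.
    rng = np.random.default_rng(seed)
    u = rng.random(n)
    a1 = alpha + 1.0
    lo, hi = r_min**a1, r_max**a1
    return (lo + u * (hi - lo)) ** (1.0 / a1)

# 1683 particles as in the text; bounds in cm are illustrative
radii = sample_radii(1683, r_min=0.5, r_max=2.0)
print(radii.min(), radii.max(), radii.mean())
```

A funnel-deposition step, as used in the paper to settle the pile under gravity, would then follow; that stage is not sketched here.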
between the sandpile and the floor is a rigid monolayer of particles drawn from the same distribution and laid down in a close - packed configuration in the shape of a flat disk .this rigid particle layer provides a rough surface for the sandpile to reduce spreading ( fig .[ f : sandpile ] ) .three different sandpiles were constructed using the material parameters described below .particles were dropped though a funnel from a low height onto the rough surface to build up the sandpile .these sandpiles were allowed to settle until the system kinetic energy dropped to near zero .this approach eliminates any bias that might arise from simply arranging the spheres by hand in a geometrical way ; the result should better represent a natural sandpile .figure [ f : sandpile ] . for this study , we compared three different sets of soft - sphere parameters for the sandpile particles ( table [ t : ssdemparams ] ) .our goal was to define a set of parameters that spans a plausible range of material properties given that the actual mechanical properties of asteroid surface material are poorly constrained . 
in the specific case of apophis , very little is known beyond the spectral type , sq ( binzel et al .there are no measurements of thermal properties that might give an indication of the presence or absence of regolith on apophis .consequently , we chose three sets of material parameters that span a broad range of material properties .the first set , denoted `` smooth '' in the table , consists of idealized frictionless spheres with a small amount of dissipation ( 5% , chosen to match the glass beads case ) .this is about as close to the fluid case that a sandpile can achieve while still exhibiting shear strength arising from the discrete nature of the particles ( and the confining pressure of surface gravity ) .it is assumed this set will respond most readily to tidal effects due to the absence of friction between the particles and the non - uniform size distribution .the second set , `` glass beads , '' is modeled after actual glass beads being used in a set of laboratory experiments to calibrate numerical simulations of granular avalanches ( richardson et al .2012 ) . in this case was measured directly , which informed our choice for ( schwartz et al .2012 ) , and and were inferred from matching the simulations to the experiments .the glass beads provide an intermediate case between the near - fluid smooth spheres and the third parameter set , denoted `` gravel . ''table [ t : ssdemparams ] .the gravel parameters were arrived at by carrying out simple avalanche experiments using roughly equal - size rocks collected from a streambed . 
in these experiments , the rocks ( without sharp edges ) were released near the top of a wooden board held at a 45 incline .the dispersal pattern on the stone floor was measured , including the distance from the end of the board to approximately half of the material , the furthest distance traveled by a rock along the direction of the board , and the maximum angle of the dispersal pattern relative to the end of the board ( fig .[ f : rockslide]a c ) .a series of numerical simulations was then performed to reproduce the typical behavior by varying the soft - sphere parameters ( fig .[ f : rockslide]d f ) .the values used in table [ t : ssdemparams ] for `` gravel '' were found in this way .the and values are quite large , reflecting the fact that the actual particles being modeled were not spheres .a correspondingly smaller timestep is needed to adequately sample the resulting forces on the particles in the simulations .the value of was measured by calculating the average first rebound height of sample gravel pieces that were released from a certain height ; was not measured but since the rocks were rough it was decided to simply set .this exercise was not meant to be an exhaustive or precise study ; rather , we sought simply to find representative soft - sphere parameters that can account plausibly for the irregularities in the particle shapes . in any case , we will find that such rough particles are difficult to displace using tidal forces in the parameter range explored here , so they provide a suitable upper limit to the effects .similarly , we do not consider cohesion in this numerical study , which would further resist particle displacement due to tidal forces .figure [ f : rockslide ] . 
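The spring/dash-pot contact law and the restitution-to-damping relation referred to in this section can be sketched generically. This is not pkdgrav's actual implementation (that follows Schwartz et al. 2012); it is the standard linear-dashpot DEM form, with illustrative parameter values.

```python
import numpy as np

def normal_contact_force(overlap, v_normal, k_n, c_n):
    # Linear spring/dash-pot normal force: a restoring term opposing
    # the particle overlap plus a velocity-proportional damping term.
    return -k_n * overlap - c_n * v_normal

def damping_from_restitution(eps_n, m_eff, k_n):
    # Damping coefficient that reproduces a normal coefficient of
    # restitution eps_n for a linear spring/dash-pot contact
    # (standard DEM relation, assumed here).
    ln_e = np.log(eps_n)
    return -2.0 * ln_e * np.sqrt(m_eff * k_n / (np.pi**2 + ln_e**2))

# glass-bead-like case: eps_n = 0.95, illustrative mass and stiffness
c_n = damping_from_restitution(0.95, m_eff=1.0e-3, k_n=1.0e3)
print(c_n, normal_contact_force(1.0e-5, 0.01, 1.0e3, c_n))
```

Perfectly elastic contacts (eps_n = 1) give zero damping, and smaller eps_n gives progressively stronger dissipation, matching the qualitative role these parameters play in table [t:ssdemparams].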
a new adaptation of ` pkdgrav ` developed in this work allows for the simulation of sandpiles located on an asteroid surface , based on the motion equations of granular material in the noninertial frame fixed to the chosen spot .detailed mechanics involved in the local motion of the sandpile during a tidal encounter were considered and represented in the code , including the contact forces ( reaction , damping , and friction between particles and/or walls ) and the inertial forces derived from an analysis of the transport motion of a flyby simulation . herewe provide a thorough derivation of the relevant external force expressions .figure [ f : frames ] .figure [ f : frames ] illustrates four frames used in the derivation , the space inertial frame ( spc ) , mass center translating frame ( mct ) , body fixed frame ( bdy ) and local frame ( loc ) .( [ e : posspc2loc ] ) gives the connection between the spatial position and the local position of an arbitrary particle in the sandpile , for which is the vector from spc s origin to the particle , is the vector from spc to mct / bdy , is the vector from mct / bdy to loc , and is the vector from loc to the particle : equations ( [ e : velspc2loc])([e : accspc2loc ] ) are derived by calculating the first- and second - order time derivatives of eq .( [ e : posspc2loc ] ) , which denote the connections of velocity and acceleration between the spatial and local representations , respectively . 
$\boldsymbol{\omega}$ indicates the angular velocity vector of the asteroid in spc or mct . the operator $\mathrm{d}/\mathrm{d}t$ denotes the time derivative with respect to the inertial frame spc , while $\tilde{\mathrm{d}}/\mathrm{d}t$ denotes the time derivative with respect to the body - fixed frame bdy or loc : $$\frac{\mathrm{d}^2 \boldsymbol{\rho}}{\mathrm{d}t^2 } = \frac{\mathrm{d}^2 \mathbf{R}}{\mathrm{d}t^2 } + \boldsymbol{\omega}\times\left [ \boldsymbol{\omega}\times ( \mathbf{l}+\mathbf{r } ) \right ] + \frac{\tilde{\mathrm{d } } \boldsymbol{\omega}}{\mathrm{d}t}\times ( \mathbf{l}+\mathbf{r } ) + 2\,\boldsymbol{\omega}\times\frac{\tilde{\mathrm{d } } \mathbf{r}}{\mathrm{d}t } + \frac{\tilde{\mathrm{d}}^2 \mathbf{r}}{\mathrm{d}t^2 } .$$ the dynamical equation for an arbitrary particle of mass $m$ in the inertial frame spc can be written as eq . ( [ e : spcdyneqn ] ) , which shows that the forces a particle in the sandpile feels can be grouped into three categories : $\mathbf{F}_g$ denotes the local gravity from the asteroid ; $\mathbf{F}_p$ denotes the attraction from the planet ; and $\mathbf{F}_c$ represents all the contact forces coming from other particles and walls : $$m\,\frac{\mathrm{d}^2 \boldsymbol{\rho}}{\mathrm{d}t^2 } = \mathbf{F}_g + \mathbf{F}_p + \mathbf{F}_c .$$ we get the corresponding dynamical equation in the local frame loc ( eq . ( [ e : locdynequ ] ) ) by substituting eq . ( [ e : accspc2loc ] ) into eq . ( [ e : spcdyneqn ] ) : $$m\,\frac{\tilde{\mathrm{d}}^2 \mathbf{r}}{\mathrm{d}t^2 } = \mathbf{F}_c + \mathbf{F}_g + \mathbf{F}_p - m\,\frac{\mathrm{d}^2 \mathbf{R}}{\mathrm{d}t^2 } - m\,\boldsymbol{\omega}\times\left [ \boldsymbol{\omega}\times ( \mathbf{l}+\mathbf{r } ) \right ] - m\,\frac{\tilde{\mathrm{d } } \boldsymbol{\omega}}{\mathrm{d}t}\times ( \mathbf{l}+\mathbf{r } ) - 2\,m\,\boldsymbol{\omega}\times\frac{\tilde{\mathrm{d } } \mathbf{r}}{\mathrm{d}t } .$$ the terms on the right - hand side of eq . ( [ e : locdynequ ] ) represent the mechanical environment of a particle in the local sandpile during a tidal encounter , and can be interpreted as follows . $\mathbf{F}_c$ is the reaction force from the walls and surrounding particles , which is directly exported by ` pkdgrav ` .
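the assembled right - hand side of the local dynamical equation can be illustrated with a short python sketch . this is our own minimal rendering , not ` pkdgrav ` code ; the function and argument names are invented , and everything is written per particle with 3-vectors expressed in the local basis .

```python
import numpy as np

def local_acceleration(F_contact, mass, g_local, a_planet, a_com,
                       omega, omega_dot, l, r, v_local):
    """Acceleration of one particle in the rotating local frame.

    Terms mirror the local dynamical equation: contact forces, uniform
    local gravity, the tidal term (planet attraction minus the
    translational transport inertia), and the centrifugal, Euler
    (angular-acceleration), and Coriolis inertial terms.
    """
    tidal = a_planet - a_com                              # tidal acceleration
    centrifugal = -np.cross(omega, np.cross(omega, l + r))
    euler = -np.cross(omega_dot, l + r)                   # angular-acceleration term
    coriolis = -2.0 * np.cross(omega, v_local)
    return F_contact / mass + g_local + tidal + centrifugal + euler + coriolis
```

note the inertial terms enter with minus signs ; for a particle at rest the coriolis term vanishes , which is why it can be ignored when asking what triggers an avalanche .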
is approximated by a constant value at the loc's origin that acts as the uniform gravity felt throughout the whole sandpile , since the latter's size is negligible compared with the asteroid dimensions . the difference between the planetary attraction and the translational transport inertia force is the tidal force , which plays the primary role in moving the surface materials . the shape factor roughly describes the mass distribution of the body ( 0=oblate , 1=prolate ) . the initial rubble - pile model ( equilibrated , prior to the encounter ) had , and its relative change , defined by , was measured for several simulations parameterized by bulk density and constituent material type ( table [ t : fullscalechange ] ) . the corresponding roche limits are provided in the table , with the values estimated for the case of a circular orbit . the reshaping effects show consistent results for all three types of materials , varying between a noticeable change in shape ( magnitude ) and a negligible change in shape ( magnitude ) , with the sharpest transition occurring for a critical bulk density with corresponding roche limit of . this agrees with the analysis of apophis' disruption limit using a continuum theory for a body made up of solid constituents ( holsapple et al . ) . note we find that even a bulk density as low as did not dislodge any particles enough for them to end up in a new geometrical arrangement , due to the very short duration of the tidal encounter . figure [ f : newpile ] . table [ t : fullscalechange ] . since idealized equal - size spheres can arrange into crystal - like structures that may artificially enhance the shear strength of the body ( tanga et al . 2009 ; walsh et al . 2012 ) , we carried out a second suite of simulations with rubble piles made up of a bimodal distribution of spheres ( fig . [ f : newpile ] ) . the model consisted of big particles of radius m and small particles of radius m , bounded by the same tri - axial ellipsoid as shown in fig .
[f : rubblepile ] , and arranged randomly . as before , the bulk density and material type were varied , with notable differences in outcome compared to the equal - size - sphere cases ( table [ t : fullscalechange ] ) . particles using the "smooth" parameter set are not able to hold the overall shape of apophis ( the 1.4:1.0:0.8 tri - axial ellipsoid ) ; instead the rubble pile collapses and approaches a near - spherical shape . for the other two parameter sets , however , the overall shape of the rubble pile is maintained . figure [ f : comp ] shows the relative change of the shape factor as a function of bulk density for both types of simulations . as shown , the results from the unimodal and bimodal rubble piles present similar trends : before reaching the roche limit density , the reshaping remains small ( the magnitudes are with a certain amount of stochasticity ) ; after that , the reshaping effect sharply increases to around . the bimodal rubble piles show less resistance to the tidal disturbance , since they have less - well - organized crystalline structures compared to the unimodal cases ; they therefore exhibit larger shape modifications ( walsh et al . 2012 ) .
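the exact definition of the shape factor is elided in this excerpt ; a common choice consistent with the stated limits ( 0 for oblate , 1 for prolate ) is the second - degree - and - order parameter $\sigma = ( a^2 - b^2 ) / ( a^2 - c^2 )$ in the spirit of hu & scheeres ( 2004 ) . the sketch below assumes that definition , so treat it as illustrative only .

```python
def shape_factor(a, b, c):
    """Assumed shape factor for a homogeneous ellipsoid with semi-axes
    a >= b >= c: 0 for an oblate body (a == b), 1 for a prolate body (b == c)."""
    return (a**2 - b**2) / (a**2 - c**2)

def relative_change(sigma_before, sigma_after):
    """Relative net change of the shape factor across an encounter."""
    return (sigma_after - sigma_before) / sigma_before
```

under this assumed definition , the 1.4:1.0:0.8 apophis model has $\sigma \approx 0.727$ , squarely between the oblate and prolate limits .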
figure [ f : comp ] . regardless , catastrophic events were not detected for either type of rubble pile until the bulk density was as small as , for all material parameter sets . therefore , the reshaping effects on apophis in 2029 should be negligibly small for bulk densities in the likely range ( ) . even so , minor internal reconfigurations resulting from the tidal encounter may produce seismic waves that could propagate and affect the configuration of surface regolith . since we are only looking at localized areas on the surface , isolated from the rest of the body , vibrations emanating from other regions , in or on the asteroid , are not evaluated in the current work , although again we expect these to be small or non - existent for the specific case of the apophis 2029 encounter . only the "external" forces acting directly on the surface particles in the considered localized region are taken into account . we will look into the effects of seismic activity , which may stem from other regions of the asteroid , on an actual high - resolution rubble pile in future work . in this study , we focus on external forces , outside of the rubble pile itself ; the rigid rubble - pile model is employed as a reasonable simplification for the purpose of measuring the external forces on a surface sandpile . the right - hand side of eq . ( [ e : locdynequ ] ) provides a description of the constituents of the mechanical environment experienced by a sandpile particle , including local gravity , tidal force , centrifugal force , ltif , and the coriolis effect . note the coriolis force does not play a role before particle motion begins , so it can be ignored when examining the causes of avalanches / collapses . the centrifugal force and ltif show weak dependence on local position , so these terms can be simplified by substituting $\mathbf{r } \approx \mathbf{0}$ , since the dimension of the sandpile is much smaller than that of the asteroid . this leads to an approximation of the resultant environmental force ( eq .
( [ e : envirforce ] ) ) , which provides a uniform expression of the field force felt throughout the sandpile . the variation of shows a common pattern for different encounter trajectories and different locations on the asteroid : it stays nearly invariable for hours as the planet is approaching / departing and shows a rapid single perturbation around the rendezvous . this feature enables us to evaluate the mechanical environment by measuring the short - term change of , and moreover , to make a connection between these external stimuli and the responses of sandpiles in the simulations . spherical coordinates are used to represent the resultant force ( fig . [ f : accs ] ) , in which is the magnitude and and denote the effective gravity slope angle and deflection angle , respectively . figure [ f : accs ] . the effects of the environmental force must be considered in the context of the sandpile modeling . a rough approximation of the catastrophic slope angle ( eq . ( [ e : critcang ] ) ) can be derived from the conventional theory of the angle of repose for conical piles ( brown et al . 1966 ) , namely that avalanches will occur when the effective gravity slope angle exceeds a critical value : here denotes the resting angle of the ( conical ) sandpile , assumed to be less than the repose angle , and denotes the critical slope angle , which , if exceeded by the effective gravity slope angle , results in structural failure of the sandpile , i.e. , an avalanche .
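the spherical decomposition of the resultant force , and the repose - angle failure test , can be sketched as follows . the local $+z$ axis is assumed to be the outward surface normal , and the failure margin ( repose angle minus resting angle ) is our reading of the elided eq . ( [ e : critcang ] ) rather than a quotation of it .

```python
import numpy as np

def force_angles(F):
    """Spherical representation of the resultant environmental force F
    (a 3-vector; +z is the outward surface normal, so a pure downward
    force has zero slope angle).  Returns (magnitude, slope angle in
    degrees, deflection angle in degrees)."""
    F = np.asarray(F, dtype=float)
    magnitude = np.linalg.norm(F)
    horizontal = np.hypot(F[0], F[1])
    slope = np.degrees(np.arctan2(horizontal, -F[2]))
    deflection = np.degrees(np.arctan2(F[1], F[0]))
    return magnitude, slope, deflection

def exceeds_critical(slope_deg, resting_deg, repose_deg):
    """Assumed failure criterion: avalanche when the effective gravity
    slope angle exceeds the margin between repose and resting angles."""
    return slope_deg > repose_deg - resting_deg
```

tracking the short - term change of the returned slope angle through an encounter is then a direct way to compare external stimuli against the sandpile responses .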
essentially , avalanches of the sandpile depend primarily on the instantaneous change of the slope angle of the resultant force and should be only weakly related to the magnitude and deflection angle . the global distribution of changes of was examined for the duration of the encounter ( see below ) . it would be difficult to make an exhaustive search over all asteroid encounter orientations due to the coupling effects between the asteroid's rotation and the orbital motion . instead , we chose representative trajectories along the symmetry axes of the apophis model ( at perigee ) for study , which serves as a framework for understanding the influence of encounter orientation and the strength of tidal disturbance at different locations . ( the technique used to match the orientation of the asteroid at perigee is to first run a simulation in reverse starting at perigee with the required orientation , thereby providing the correct asteroid state for a starting position far away . ) figure [ f : traject ] shows the trajectories along three mutually perpendicular planes , including both the prograde and retrograde cases . this choice is based on the symmetry of the dynamical system , and all trajectories have a speed of km / s at a perigee distance of . figure [ f : traject ] . figure [ f : traject ] presents these representative trajectories in order , each corresponding to a distinct possible orientation of the encounter . the effective gravity slope angle changes throughout the surface of the tri - axial ellipsoid model were recorded for each trajectory , with particular attention to the location and time of the maximum change . we found that these maximum changes concentrate within several minutes around perigee . figure [ f : maxcsa ] shows the maximum values of slope angle change along these trajectories , indicating that the largest perturbation on slope angle is less than . we verified that the achievable range of the effective gravity slope angle during the encounter (
section [ s : representme ] ) is within the safe limit predicted by eq .( [ e : critcang ] ) for all three sandpiles ( section [ s : initialcondition ] ) ; that is , a slope change below is not enough to trigger any massive avalanches throughout the sandpiles .figure [ f : maxcsa ] .figure [ f : maxcsa ] also shows a general dependence of the tidal perturbation on the orientation of the encounter trajectory , namely that the three step levels seen in the figure correspond to three directions of the relative planet motion at perigee .trajectories correspond to the direction of the body long axis , which leads to relatively strong tidal effects ; trajectories correspond to the direction of the intermediate axis , which leads to moderate effects ; while trajectories correspond to the direction of the short axis , which leads to relatively weak tidal effects .figure [ f : patterns ] illustrates the distribution of effective gravity slope angle change at perigee for the three trajectory sets , with a denoting trajectories , b denoting trajectories , and c denoting trajectories .we confirm that the four trajectories in each set present visually the same distribution around perigee , therefore we chose the patterns due to trajectories 1 , 5 , and 9 for demonstration of sets a , b , and c , and generalize these three patterns as representative .several points can be inferred from fig .[ f : patterns ] .first , the direction to the planet at perigee largely determines the global distribution of tidal perturbation , that is , the strongest effects tend to occur along the long axis and the weakest effects along the short axis .second , at perigee , the largest slope change occurs near areas surrounding the pole for the most favorable orientation ( set a ) , while the largest slope change at the pole occurs several minutes before or after perigee ( see fig . 
[f : pxpypz ] ) . these maximum slope changes are about equal in magnitude ( the change is only slightly smaller at the pole compared to the area immediately surrounding it , but not enough to make a difference to the avalanches ) , so for simplicity we just use the poles themselves as our testing points . third , the duration of strong tidal effects depends on the eccentricity of the encounter trajectory . we checked the trajectories shown in fig . [ f : traject ] and found the duration of strong perturbation , defined for illustration as the period when the force magnitude stays above of the peak value , is tens of minutes long ; correspondingly , the responses of the surface material are also transitory and weak . figure [ f : patterns ] . the three poles p , p and p ( see figure [ f : traject ] ) were chosen as the test locations for the sample sandpiles since the effective gravity slope angle is near zero at these locations , which enables the sandpiles to hold their initial shapes . figure [ f : pxpypz ] illustrates the time variation of the slope angle at the poles during the encounter . the results derived from all trajectories in fig . [ f : traject ] are shown , all to the same scale . the most perturbed pole is p on the intermediate axis , which shows the largest slope angle change in most cases , except for trajectories {3,4,9,10} . it is notable that trajectories in the same plane always share similar variation at the three poles , such as {1,2,5,6} , {3,4,9,10} , and {7,8,11,12} . the curves at p and p show similar doublet shapes since those poles are both located in the rotational plane . the variation at p is more complicated due to the fact that the spin pole is not perpendicular to the orbit plane . figure [ f : pxpypz ] . figure [ f : pxpypz ] can be used to estimate the changes in the mechanical environment at these pole locations .
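the orientation - matching technique described above ( running a simulation in reverse from perigee to obtain a consistent distant starting state ) relies on the time reversibility of the integrator . a self - contained python sketch with a point - mass planet and a leapfrog ( kick - drift - kick ) scheme , ignoring the asteroid's rotation , illustrates the idea ; all names and values here are our own .

```python
import numpy as np

def integrate_twobody(pos, vel, mu, dt, nsteps):
    """Leapfrog (KDK) integration of point-mass motion about a planet of
    gravitational parameter mu fixed at the origin.  Passing a negative
    dt runs the trajectory backward in time, exactly retracing a
    forward integration (up to roundoff)."""
    pos = np.array(pos, dtype=float)
    vel = np.array(vel, dtype=float)
    for _ in range(nsteps):
        acc = -mu * pos / np.linalg.norm(pos) ** 3
        vel += 0.5 * dt * acc              # half kick
        pos += dt * vel                    # drift
        acc = -mu * pos / np.linalg.norm(pos) ** 3
        vel += 0.5 * dt * acc              # half kick
    return pos, vel
```

starting from the desired perigee state and integrating with negative dt yields the far - away initial condition that , integrated forward , reproduces the required orientation at perigee .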
since the slope angle change is the primary mechanism driving avalanches on the sandpiles , the results derived from different trajectories serve as a framework to determine the magnitude of tidal perturbation at the candidate locations and locations in between , which turns out to be quite small for the apophis encounter ( less than about in slope for these cases ) . as analyzed above , catastrophic avalanches solely due to external forces acting directly on the surface particles may never occur on apophis during the 2029 encounter since the tidal perturbation will be very weak ; however , it might still have the potential to trigger some tiny local landslides of the surface materials . generally , the regolith experiences more activity than the constituents deep in the asteroid due to the dynamical environment on the surface , including the microgravity , maximum tidal and centrifugal acceleration , and smaller damping forces from the surroundings due to the smaller confining pressure . we therefore conclude that resurfacing due to external perturbations should occur before wholesale reshaping during the encounter ( discussed in section [ s : reshpeff ] ) .
in the following sections we detail our numerical examination of the local sandpiles for the predicted apophis encounter scenario to estimate the limit and magnitude of the material responses in 2029 . one problem that must be confronted in the numerical simulations is that the soft - sphere sandpiles exhibit slow outward spreading due to the accumulation of velocity noise , and this slow spreading may eventually lead to a collapse of the whole sandpile . unfortunately , the required integration time is long ( hours ) for a granular system , so the numerical noise has to be tightly controlled in our method . two techniques are used in this study . first , as described in section [ s : initialcondition ] , a rough pallet made up of closely packed spheres in a disk fixed to the ground is used to reduce spreading of the bottom particles . second , we introduce a critical speed in the code , below which all motions are considered to be noise and are forced to zero . we found a speed threshold of m / s by launching hours - long simulations of a static sandpile for different material properties and finding the corresponding minimum value of the critical speed that limits the numerical spreading . simulations of sandpiles without a flyby were carried out first , serving as a reference for subsequent flyby simulations . sandpiles in this situation were confirmed to stay equilibrated for a long period , which suggests any avalanches in our numerical experiments would be attributable entirely to the effects of the tidal encounter . we performed local simulations with the sandpiles generated in section [ s : initialcondition ] to consider the response of different materials to the apophis encounter . the local frames at the three poles p , p and p ( fig . [ f : traject ] ) were used to position the sample sandpiles , and the flyby simulation data derived from the fiducial trajectories were employed as the source of external perturbations for the local simulations . there are possible
combinations in total for different materials , locations , and trajectory orientations , covering a wide range of these undetermined parameters . in all the simulations , the starting distance of apophis from the center of the planet was set to a value large enough to ensure the sandpiles were fully equilibrated before the perigee approach . for our study , we concentrated on the connections between the sandpile's responses and significant variables , e.g. , spin orientation and sandpile locations , by which we will get a better sense of the surface processes due to a tidal encounter . the sandpiles located at pole p for trajectory 2 ( fig . [ f : traject ] ) were examined in detail , this being the case that suffers the strongest tidal perturbation ( fig . [ f : pxpypz ] ) . figure [ f : avalanches ] illustrates the disturbances on sandpiles of three different materials for this configuration . each panel includes a snapshot of the sandpile after the tidal encounter with the displaced particles highlighted and a diagram showing the time variation of the total kinetic energy with planetary distance . intense events can be identified by noting the peaks of the kinetic energy curves , and the particles involved in these events are marked with different colors to show the correspondence between the disturbed site and occurrence time . figure [ f : avalanches ] . the scale of the disturbances proves to be tiny even for the "strongest" case in fig . [ f : avalanches ] .
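the critical - speed noise filter described above can be sketched in a few lines of python . the threshold used here is a placeholder , since the calibrated value is elided in this excerpt .

```python
import numpy as np

def quench_noise(velocities, v_crit=1e-5):
    """Zero out particle velocities slower than the critical speed v_crit
    (a placeholder value, in m/s), treating them as accumulated numerical
    noise rather than physical motion."""
    v = np.array(velocities, dtype=float)
    speeds = np.linalg.norm(v, axis=1)
    v[speeds < v_crit] = 0.0
    return v
```

applied every step , this suppresses the slow outward creep of the pile while leaving genuine avalanche motion untouched .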
as illustrated , the displaced particles are few in number and mostly distributed on the surface of the sandpile . we found that the maximum displacement of a given particle is less than times its radius ; that is , these disturbances only resulted from the collapse of some local weak structures and the small chain reaction among the surrounding particles . moreover , the "gravel" sandpile proves to be unaffected by the tidal encounter , because the static friction is quite large ( table [ t : ssdemparams ] ) and all particles are locked in a stable configuration , in which case the total kinetic energy of the sandpile remains very small ( j ) . the "glass beads" sandpile suffered small but concentrated disturbances that involved very few particles . accordingly , the displacements of these particles are relatively large . the "smooth" sandpile presents near - fluid properties , with many surface particles experiencing small - amplitude sloshing . the motion eventually damps out and the displacements of the involved particles are tiny ( smaller than those of the "glass beads" ) . based on the detailed analysis of this representative scenario , we performed simulations at other poles and at other orientations , constructing a database to reveal any connections between disturbances and these parameters . however , the results show little association between the two : for the "gravel" sandpile , no disturbances were detected for any location and any orientation due to the large ; for the "glass beads" sandpile , some small - scale avalanches can always be triggered , while the occurrence ( site and time ) of these avalanches seems to be widely distributed and independent of location and orientation ; and for the "smooth" sandpile , surface particles can feel the tidal perturbation and show slight sloshing , but no visible avalanches eventually resulted , because the initial low slope angle in this case imposes a relatively stable structure that is always able to recover from the
external perturbations . to illustrate the effect of stronger perturbations on our model sandpiles , we carried out a few simulations of closer approaches , specifically at and , for the same encounter speed of km / s . these scenarios , though unlikely to occur in 2029 based on the current understanding of apophis' orbit , are presented here to illustrate the magnitudes of tidal resurfacing effects over a wider range of perturbation strengths . flyby simulations of rubble piles were first carried out to quantify the reshaping effects , using the monodisperse and bidisperse rubble - pile models ( see section [ s : reshpeff ] ) . the bulk densities for both models were set to g / cc . table [ t : reshp24 ] presents the relative net changes of the shape factor for these rubble piles of different materials at different perigee distances . table [ t : reshp24 ] . table [ t : reshp24 ] shows results consistent with section [ s : reshpeff ] , namely that the bimodal rubble pile shows greater fluidity and larger shape changes during the encounter . to be more specific , the magnitude of the reshaping effects at perigee distance ( still larger than the roche limit of for a fluid body ) remains small ( ) , while that at perigee distance ( smaller than the roche limit ) reaches a significant level of . it is notable that the rubble piles did not experience any irreversible distortion even for an approach distance as close as , because the duration of strong tidal effects is relatively short ( see section [ s : reshpeff ] ) . nevertheless , in this section we still adopt the assumption of rigidity to measure the quantities required by the local simulations , which is acceptable since these scenarios are fictitious and only designed to exhibit some massive resurfacing effects .
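the fluid roche limits quoted above can be estimated from the classical result $d \approx 2.455\,r_p ( \rho_p / \rho_b )^{1/3}$ ; whether the paper uses this fluid coefficient or a rigid - body variant is not stated in this excerpt , so the sketch below assumes the fluid case .

```python
def roche_limit_fluid(R_planet, rho_planet, rho_body):
    """Classical fluid Roche limit: distance inside which tides overcome
    the self-gravity of a fluid satellite (coefficient 2.455)."""
    return 2.455 * R_planet * (rho_planet / rho_body) ** (1.0 / 3.0)
```

for earth ( mean density about 5.51 g / cc ) and a rubble pile of 2.0 g / cc , this gives roughly 3.4 earth radii , which is why perigee distances of a few earth radii bracket the transition seen in the reshaping results .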
for the same reason , in this section we do not present systematic local simulations for different sandpile locations and orientations as done for the real encounter scenario ; instead , we simply place the sample sandpiles ( see fig . [ f : sandpile ] ) at the pole of the body long axis p , and choose trajectory ( see fig . [ f : traject ] ) as the encounter trajectory to give rise to the maximum tidal effects . figures [ f : refrsh4f ] and [ f : refrsh2f ] illustrate the responses of the three sandpiles for perigee distances and , respectively . each panel includes snapshots of the sandpile before and after the tidal encounter , and the overall shape of the sandpile is traced with a white border for emphasis . diagrams showing the time variation of the total kinetic energy with planetary distance are also included , in which the intense avalanches can be identified by noting the peaks of the kinetic energy curves ( note the different vertical scales ) . figure [ f : refrsh4f ] . figure [ f : refrsh2f ] . as illustrated in fig . [ f : refrsh4f ] , the shape changes of the three sandpiles are still small ( but visible ) for the encounter at perigee , and involve many more particles than the encounter at ( fig . [ f : avalanches ] ) . and accordingly , the magnitudes of the total kinetic energy increase consistently for the three sandpiles of different materials , except for the gravel case at , where it appears the frictional "lock" established when the pile was first created is disturbed enough to cause a stronger distortion than might otherwise be expected , causing a sharp spike in the kinetic energy plot .
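the diagnostics behind the kinetic - energy curves and the displaced - particle snapshots reduce to two simple measurements , sketched here in python with names of our own choosing .

```python
import numpy as np

def total_kinetic_energy(masses, velocities):
    """Total kinetic energy of the pile; peaks in this quantity over time
    mark avalanche events."""
    v2 = np.sum(np.asarray(velocities, dtype=float) ** 2, axis=1)
    return 0.5 * np.sum(np.asarray(masses, dtype=float) * v2)

def displaced(initial_pos, final_pos, radii, factor=1.0):
    """Flag particles whose net displacement across the encounter exceeds
    `factor` times their own radius."""
    d = np.linalg.norm(np.asarray(final_pos, dtype=float)
                       - np.asarray(initial_pos, dtype=float), axis=1)
    return d > factor * np.asarray(radii, dtype=float)
```

recording the kinetic energy at each output step and the displacement flags at the end is enough to reproduce the style of analysis shown in the avalanche figures .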
essentially , the "gravel" sandpile experienced a more significant collapse than the "glass bead" sandpile . as stated in section [ s : initialcondition ] , the three sandpiles were constructed in a manner that allowed for some inherent randomness , thus their energy state and strength may differ to some degree , and apparently the "gravel" sandpile was closer to its failure limit than the other two . this adds an extra element of stochasticity to the results , an aspect to be explored in future work . figure [ f : refrsh2f ] shows the results from the encounter at perigee , for which the encounter trajectory partly entered the roche sphere ( ) . in this case , the shapes of the three sandpiles are highly distorted during the encounter , with the involved particles slumped towards the direction in which the planet is receding . the corresponding changes in total kinetic energy become extremely large when the massive avalanches occur , especially for the "smooth" particles . the results of this section suggest that a encounter may alter the regolith on apophis' surface slightly , and a encounter may produce a strong resurfacing effect ( of course , we would expect considerable global distortion as well , if the asteroid is a rubble pile ) . the shape changes of the sandpiles depend on the orientation of the encounter trajectory ; i.e. , particles can be dragged away along the direction in which the planet recedes . the question of whether a terrestrial encounter can reset the regolith of neas has been debated for some time ( e.g.
, binzel et al . 2010 , nesvorný et al . ) . important evidence is provided by measurements of spectral properties , which suggest that asteroids with orbits conducive to tidal encounters with planets show the freshest surfaces , but quantitative evaluations of this mechanism remain rare and approximate . the two - stage approach presented in this paper enables the most detailed simulations to date of regolith migration due to tidal effects , which we have applied to make a numerical prediction of the surface effects during the 2029 approach of ( 99942 ) apophis . we confirm that the shape modification of apophis due to this encounter will likely be negligibly small , based on the results of systematic simulations parameterized by bulk density , internal structure , and material properties . the analysis of the global mechanical environment over the surface of apophis during the 2029 encounter reveals that the tidal perturbation is too small even to result in any large - scale avalanches of the regolith materials , based on a plausible range of material parameters and provided that external forces acting directly on the surface dominate any surface effects due to seismic activity emanating from other regions of the body . future work will explore these second - order surface effects and their relevance to this encounter , and to cases of tidal resurfacing of small bodies in general . it is notable that our approach is capable of capturing slight changes in the regolith ( modeled as sandpiles ) ; thus , through numerical simulation , we find that this weak tidal encounter does trigger some local disturbances for appropriate material properties . these possible disturbances , though local and tiny , are essentially related to the nature of the sandpile , which is in essence a collection of rocks and powder in a jammed state . the sandpile is formed by a competition between the constituents to fall under gravity and eventually reach an equilibrium with mutual support . it can hold a
stable shape under a constant environmental force at low sandpile densities ( compared with close packing ) , which is primarily due to interior collective structures , called bridges , that are formed at the same time as the sandpile and are distributed non - uniformly throughout it ( mehta 2007 ) . the soft - sphere method provides a detailed approximation to a real sandpile in terms of granular shape and contact mechanics , and can therefore reproduce the bridges that dominate the structure of the sandpile . accordingly , we can describe the nuanced responses due to the changes in section [ s : gcme ] and fig . [ f : accs ] . changing the slope angle is the most efficient way to break the equilibrium of substructures in the sandpile , while changing the magnitude and deflection angle may also play a role in the causes of avalanches . changes in can readjust the interparticle overlaps and cause collapse of some bridges , which is a principal mechanism of compaction by filling the voids during small structural modifications ( mehta et al . ) . this dependence was recently demonstrated in experiments measuring the angle of repose under reduced gravity ( kleinhans et al . ) . similarly , the weak links in bridges of the sandpile may also be disturbed and broken during the sweeping of in the plane . in addition , mehta et al . ( 1991 ) pointed out that even slight vibration caused by the environmental force may result in the collapse of long - lived bridges , which is also responsible for the sandpile landslides during the encounter . although the magnitudes of the avalanches are different for the three materials we tested , they are nonetheless all very small . we note the occurrence of these local avalanches shows little dependence on the orientation / location of the sandpile , because all the perturbations are small in magnitude and the sandpile behavior ( these small - scale avalanches ) actually depends on the presence of weak bridges inside the sandpile , which in turn
could be sensitive to the changes of the environmental force's magnitude , deflection angle , and slope angle in a somewhat random way . in any case , this numerical study shows that tidal resurfacing may not be particularly effective at moderate encounter distances . we predict that overall resurfacing of apophis' regolith will not occur if the only source of disturbance is external perturbations . mini - landslides on the surface may still be observed by a visiting spacecraft with sufficiently sensitive monitoring equipment . this provides strong motivation for exploration of apophis in 2029 ( michel et al . ) . this paper provides a numerically derived prediction of the surface effects on ( 99942 ) apophis during its 2029 approach , which is likely to be one of the most significant asteroid encounter events in the near future . a two - stage scheme was developed based on the soft - sphere code implementation in ` pkdgrav ` to mimic both a rubble pile's ( rigid and flexible ) responses to a planetary flyby and a sandpile's responses to all forms of perturbations induced by the encounter . the flyby simulations with the rubble pile indicate that reshaping effects due to the tidal force on apophis in 2029 will be negligibly small for bulk densities in the expected range ( ) . the resultant environmental force felt by the sandpile on the asteroid surface was approximated with a uniform analytical expression , which led to an estimate of the changes in the global mechanical environment . three typical patterns of perturbation were presented based on the asteroid body and spin orientation at perigee , showing a general dependence of the magnitude of tidal perturbation on the orientation of the trajectory . twelve fiducial trajectories were used to calculate the magnitude of the tidal perturbation at three poles of the tri - axial ellipsoid model , indicating that the strongest tidal perturbation appears where the local slope is originally steep and that the duration of the
strong perturbation is short compared with the whole process . the tidal perturbation on surface materials is confirmed to be quite weak for the 2029 encounter ; therefore , large - scale avalanches should not occur . however , we showed this weak perturbation does trigger some tiny local landslides on the sample sandpiles , though the involved particles are few in number and are distributed on the surface of the sandpile . these small - scale avalanches result from the breaking of weak substructures by slight external perturbations ; therefore , the occurrence of these local avalanches is widely distributed and presents little dependence on the encounter parameters . the simulations of closer approaches show that an encounter at is capable of triggering some massive avalanches of the sandpiles , i.e. , of altering the regolith on apophis' surface significantly ( although the entire body would also undergo significant shape change in this case ) . further research will be performed to generalize our work over a wide range of possible asteroid - planet encounter conditions .
And as stated above, we will also investigate whether even slight internal perturbations in the asteroid during tidal encounters may contribute to noticeable surface motions.

This material is based on work supported by the US National Aeronautics and Space Administration under grant nos. NNX08AM39G, NNX10AQ01G, and NNX12AG29G issued through the Office of Space Science and by the National Science Foundation. Some simulations in this work were performed on the "yorp" cluster administered by the Center for Theory and Computation in the Department of Astronomy at the University of Maryland. Patrick Michel acknowledges support from the French space agency CNES. Ray tracing for Figs. 1, 2, 4, 5 and 12 was performed using POV-Ray (http://www.povray.org/).

* References: *

- 1999. Catastrophic disruptions revisited. Icarus 142, 5-20.
- 2009. Spectral properties and composition of potentially hazardous asteroid (99942) Apophis. Icarus 200, 480-485.
- 2010. Earth encounters as the origin of fresh surfaces on near-Earth asteroids. Nature 463, 331-334.
- 1966. Principles of Powder Mechanics. Oxford: Pergamon.
- 2007. Albedo and size determination of potentially hazardous asteroids: (99942) Apophis. Icarus 188, 266-269.
- 2013. Mars encounters cause fresh surfaces on some near-Earth asteroids. Icarus 227, 112-122.
- 2006. The rubble-pile asteroid Itokawa as observed by Hayabusa. Science 312, 1330-1334.
- 2008. Predicting the Earth encounters of (99942) Apophis. Icarus 193, 1-19.
- 2006. Tidal disruptions: a continuum theory for solid bodies. Icarus 183, 331-348.
- 2004. Numerical determination of stability regions for orbital motion in uniformly rotating second degree and order gravity fields. Planetary and Space Science 52, 685-692.
- 2010. Fragment properties at the catastrophic disruption threshold: the effect of the parent body's internal structure. Icarus 207, 54-65.
- 2011. Static and dynamic angles of repose in loose granular materials under reduced gravity. Journal of Geophysical Research 116, E11004.
- 2010. Do planetary encounters reset surfaces of near-Earth asteroids? Icarus 209, 510-519.
- 2007. Granular Physics. Cambridge University Press.
- 1991. Vibrated powders: a microscopic approach. Physical Review Letters 67, 394-397.
- 2004. Cooperativity in sandpiles: statistics of bridge geometries. Journal of Statistical Mechanics: Theory and Experiment, P10014.
- 2001. Collisions and gravitational reaccumulation: forming asteroid families and satellites. Science 294, 1696-1700.
- 2012. Probing the interior of asteroid Apophis: a unique opportunity in 2029. Special Session 7 of the XXVIII IAU General Assembly, 29-31 August 2012, Beijing (China), abstract book.
- 1998. Tidal distortion and disruption of Earth-crossing asteroids. Icarus 134, 47-76.
- 2000. Direct large-scale N-body simulations of planetesimal dynamics. Icarus 143, 45-59.
- 2009. Numerical simulations of asteroids modeled as gravitational aggregates. Planetary and Space Science 57, 183-192.
- 2011. Numerical simulations of granular dynamics: I. Hard-sphere discrete element method and tests. Icarus 212, 427-437.
- 2012. Numerical simulations of landslides calibrated against laboratory experiments for application to asteroid surface processes. American Astronomical Society, DPS meeting 44, 105.06.
- 2005. Abrupt alteration of asteroid 2004 MN4's spin state during its 2029 Earth flyby. Icarus 178, 281-283.
- 2012. An implementation of the soft-sphere discrete element method in a high-performance parallel gravity tree-code. Granular Matter 14, 363-380.
- 2013. Numerically simulating impact disruptions of cohesive glass bead agglomerates using the soft-sphere discrete element method. Icarus 226, 67-76.
- 2014. Low-speed impact simulations into regolith in support of asteroid sampling mechanism design I.: comparison with 1-g experiments. Planetary and Space Science.
- 2001. Cosmological N-body simulations and their analysis. University of Washington, 141.
- 2009. Rubble-pile reshaping reproduces overall asteroid shapes. The Astrophysical Journal 706, 197-202.
- 2013. Improved astrometry of (99942) Apophis. Acta Astronautica 90, 56-71.
- 2000. NEAR at Eros: imaging and spectral results. Science 289, 2088-2097.
- 2012. Spin-up of rubble-pile asteroids: disruption, satellite formation, and equilibrium shapes. Icarus 220, 514-529.
- 2013. The potentially dangerous asteroid (99942) Apophis. Monthly Notices of the Royal Astronomical Society 434(4), 3055-3060.

Table [t:ssdemparams]: Soft-sphere parameter sets used to represent three different material properties of sandpiles in the simulations. Here mu_s is the coefficient of static friction (mu_s = tan(phi), where phi is the material friction angle), mu_r is the coefficient of rolling friction, eps_n is the normal coefficient of restitution (1 = elastic, 0 = plastic), eps_t is the tangential coefficient of restitution (1 = no sliding friction), k_n is the normal spring constant, and k_t is the tangential spring constant.

* Figure [f:sandpile]: * Snapshots of sandpiles constructed using three different materials, which are (a) "gravel," (b) "glass beads," and (c) "smooth," with corresponding soft-sphere parameters listed in table [t:ssdemparams]. The same constituent spheres are used in each sandpile both for the free particles (green) and the rigid particles (white). The values of average slope and pile height after equilibrium is achieved are indicated in the snapshots.

Snapshots of the avalanche experiment and corresponding simulations.
In the experiments, similar-size gravel pieces were piled up on the slope each time and released all at once by removing the supporting board. Snapshots from both the experiment and the simulation include frames from the beginning (a, d), middle (b, e), and end (c, f) of the avalanche event, respectively.

Sketch of the frames used for deriving the motion equations of the local sandpile, indicated with different colors. SPC (black) is an inertial frame with the origin and axes set by `pkdgrav`. MCT (blue) is a noninertial translating frame with the origin at the mass center of the asteroid and x-, y-, z-axes parallel to SPC. BDY (green) is a noninertial frame that is fixed to the asteroid and initially coincides with MCT. LOC (red) is a frame fixed to the asteroid surface, taking the origin to be where the sandpile is located and choosing the z-axis to be normal to the surface at that point.

Diagram of asteroid (99942) Apophis. The tri-axial ellipsoid (light gray) denotes the overall shape model, while the arranged particles (khaki) denote the rubble-pile model used for numerical simulations. The highlighted particles (red) are markers used for determining the attitude of the asteroid in SPC.

Snapshot of the rubble-pile model for Apophis with bimodal particles in irregular packing. The tri-axial ellipsoid (light gray) denotes the overall shape model, and the arranged particles (khaki) show the rubble-pile model used for numerical simulations.
The relative change of shape factor as a function of bulk density during the encounter. The solid line (with triangular points) shows the results for the original rubble pile with equal-sized particles. The dashed line (with square points) shows the results for the bimodal rubble pile with irregular packing.

Sketch of the environmental force that the sandpile feels in LOC. The plane (gray) indicates the local tangential plane. The red arrow indicates the environmental force given by eq. ([e:envirforce]). The label indicates its magnitude, and the slope angle (elevation) and deflection angle (azimuth) of the effective gravity are also indicated, respectively.

Sketch of the fiducial trajectories in different orientations. These trajectories (blue numbered lines) are placed in the perpendicular planes (yellow) of the Apophis model (gray ellipsoid), matching the three axial directions at perigee. The blue arrows show the prograde and retrograde directions. The red arrow denotes the angular velocity. The three labeled points indicate the poles along the three axes, respectively.

Maximum values of slope angle change for the fiducial trajectories. The stars denote the maximum values, and the dashed lines denote the levels of tidal perturbation (the average of the 4 values in each case), which divide the trajectories into basic categories according to the spin orientation at perigee. See fig. [f:traject] for the trajectory orientations.

Global distribution of slope angle change at perigee for the three trajectory sets A, B and C, each including 4 trajectories of the same spin orientation at perigee (see text for an explanation of each set). A uniform colormap with a common upper limit is used for the three plots.

Time variation of the effective gravity slope angle at the three poles (dashed, solid, and dotted lines, respectively) during the encounter. Each subgraph corresponds to one fiducial trajectory marked with its number on the upper left (refer to
fig. [f:traject]).

Snapshots of sandpiles after the tide-induced avalanches in 2029, with corresponding time variations of their total kinetic energy. The three panels include the results of sandpiles constructed using the three materials (a) "gravel," (b) "glass beads," and (c) "smooth," respectively. Particles that moved more than a given fraction of a particle radius during the avalanches are highlighted. The peaks in the total kinetic energy curves indicate the occurrence of avalanches; the peaks are labeled, and the corresponding particles that moved appreciably are marked in different colors (red, blue, orange) in the snapshots. The dotted line in each graph shows the evolution of the planet-asteroid distance during the encounter.

Snapshots of the sample sandpiles before and after the avalanches induced by the encounter at perigee, with corresponding time variations of their total kinetic energy. The white lines in the snapshots indicate the original and final shapes of the sandpiles. The peaks in the total kinetic energy curves indicate the occurrence of avalanches, and the dotted line shows the variation of the planet-asteroid distance during the encounter.

* Abstract: * Asteroid (99942) Apophis' close approach in 2029 will be one of the most significant small-body encounter events in the near future and offers a good opportunity for exploration to determine the asteroid's surface properties and measure any tidal effects that might alter its regolith configuration. Resurfacing mechanics has become a new focus for asteroid researchers due to its important implications for interpreting surface observations, including space weathering effects. This paper provides a prediction for the tidal effects during the 2029 encounter, with an emphasis on whether surface refreshing due to regolith movement will occur.
The potential shape modification of the object due to the tidal encounter is first confirmed to be negligibly small with systematic simulations, thus only the external perturbations are taken into account for this work (despite this, seismic shaking induced by shifting blocks might still play a weak role, and we will look into this mechanism in future work). A two-stage approach is developed to model the responses of asteroid surface particles (the regolith) based on the soft-sphere implementation of the parallel N-body gravity tree code `pkdgrav`. A full-body model of Apophis is sent past the Earth on the predicted trajectory to generate the data of all forces acting at a target point on the surface. A sandpile constructed in the local frame is then used to approximate the regolith materials; all the forces the sandpile feels during the encounter are imposed as external perturbations to mimic the regolith's behavior in the full scenario. The local mechanical environment on the asteroid surface is represented in detail, leading to an estimation of the change in the global surface environment due to the encounter. Typical patterns of perturbation are presented that depend on the asteroid orientation and sense of rotation at perigee. We find that catastrophic avalanches of regolith materials may not occur during the 2029 encounter due to the small level of tidal perturbation, although slight landslides might still be triggered in positions where a sandpile's structure is weak. Simulations are performed at different locations on Apophis' surface and with different body- and spin-axis orientations; the results show that the small-scale avalanches are widely distributed and manifest independently of the asteroid orientation and the sandpile location. We also include simulation results of much closer encounters of Apophis with the Earth than what is predicted to occur in 2029, showing that much more drastic resurfacing takes place in these cases.
* Keywords: * asteroids, dynamics; asteroids, surfaces; near-Earth objects; regoliths; tides, solid body
Bell's theorem shows that the correlations exhibited by quantum mechanical systems go beyond what is achievable by any local realistic theory (i.e., any local hidden variable model). This result has deep implications and still spurs several subfields of research. On the one hand, from a foundational perspective, it has been asked which physical principle in nature could be responsible for giving rise to precisely the particular set of correlations allowed by quantum mechanics. Since it has been shown that this set is strictly smaller than what could be achieved by just imposing the no-signaling principle (i.e., the impossibility of instantaneous propagation of information), several works have considered more restrictive physically motivated axioms. Although these approaches rule out several subsets of no-signaling correlations, no definite answer has yet been found, as there still exist supra-quantum correlations compatible with these principles. On the other hand, from a more practical point of view and in the context of quantum information theory, it has been realized that quantum nonlocality can be regarded as a resource for device-independent quantum information processing (DIQIP). The tasks that can be carried out in this way include key distribution for cryptography, randomness generation and dimensionality certification.
In order to understand which physical principles constrain the set of quantum correlations and to elucidate the ultimate limitations behind DIQIP protocols, a fundamental question arises: can an efficient mathematical description of this set be found? It turns out that this is a highly nontrivial question. So far, the only systematic and general way to bound this set is given by the NPA hierarchy. This provides a hierarchy of semidefinite programs that approximate the quantum set from the outside. Although this constitutes an extremely powerful tool, it has been mainly exploited numerically. Thus, many properties of the quantum set remain unknown, since verifying general statements cannot usually be approached by numerical means. It would therefore be desirable to have simple analytical conditions constraining the set of quantum correlations. These could moreover be used to exclude some general subsets of no-signaling correlations from the quantum set based on their analytical properties, or to bound the maximal efficiency that DIQIP tasks with a given structure can attain.
The aim of this article is to fill this gap: I provide simple general analytical conditions that any quantum behaviour should satisfy, which rely on standard matrix analysis. Although these conditions emerge from the first step of the NPA hierarchy and are, therefore, only necessary, I will consider examples showing their strength and non-triviality. Furthermore, I will provide several different applications of this result: a proof of the separation between the quantum set and extremal nonlocal no-signaling correlations (a question recently raised in the literature), a systematic way to obtain quantum bounds on arbitrary Bell inequalities (generalizing a recent result), and the possibility of doing robust self-testing of bipartite maximally entangled states by building Bell inequalities that are maximally violated by them.

We will consider the standard bipartite Bell scenario in which two parties, A and B, that could have interacted in the past remain now uncommunicated. Each party can freely choose questions (inputs) from a finite alphabet. Given any of these inputs x and y, each party obtains an outcome a and b, respectively, where the output alphabets are also finite. The central object here, referred to as a behaviour, is the joint conditional probability distribution of obtaining the outputs given the choice of inputs, p(ab|xy). This list of numbers must be nonnegative and normalized. Moreover, since communication among the parties is not possible during the choice of input and recording of the output, the marginal of each party must be independent of the other's action. The set of all behaviours satisfying these no-signaling conditions will be denoted by NS. Particular elements of this set are the different local deterministic behaviours (LDBs), in which for every party a unique output occurs with probability 1 for every choice of input. The convex hull of these behaviours gives rise to the local set L.
On the other hand, the quantum set Q is given by all behaviours that can be obtained by performing local measurements on bipartite quantum states of unrestricted dimension, i.e., p(ab|xy) = <psi| E^a_x (x) F^b_y |psi> for some projectors such that sum_a E^a_x and sum_b F^b_y equal the identity in each party's Hilbert space. The crucial observation mentioned in the introduction is that L is strictly contained in Q, which is strictly contained in NS. Although Q is a convex set, it is in general very hard to decide from the definition whether a given behaviour is in Q or not. On the contrary, L and NS are both convex polytopes with vertices given by the LDBs in the first case, to which we have to add some nonlocal vertices in the second case. Following the standard notation, we will refer to these extremal nonlocal behaviours as Popescu-Rohrlich (PR) boxes.

In order to present our results, we will use some further notation. We will arrange every behaviour to form a real matrix P with entries p(ab|xy); in the standard notation of quantum mechanics, P can be partitioned as a block matrix with blocks indexed by the input pair (x, y) and entries within each block indexed by the output pair (a, b). Normalization imposes that the entries in each block add up to one, while no-signaling imposes that the sum of the elements in the same row (column) for blocks in the same row (column) is equal. We will furthermore consider different matrix norms. Following the Schatten p-norm notation, ||.||_1 will be the trace norm (i.e., the sum of all singular values) while ||.||_inf will be the spectral norm (i.e., the maximal singular value). We are now in the position to state our main result.

Theorem 1. In every scenario, if P is in Q then ||P||_1 <= sqrt(m_A m_B), where m_A and m_B denote the number of inputs of each party.

We will actually prove the following stronger result: for every P in Q and every real matrix A of the same dimensions, it must hold that

tr(A^T P) <= sqrt(m_A m_B) ||A||_inf.  ([ineq1])

The relevance of this result will be discussed later on. For the moment, let us simply point out that Theorem 1 follows from it by noticing that ||P||_1 = max { tr(A^T P) : ||A||_inf <= 1 }, where the maximization can be restricted to isometries. In order to prove that inequality ([ineq1]) is true, we will show that it holds for every behaviour in
the first step of the NPA hierarchy (notice that Q is contained in it). It is worth mentioning that Theorem 1 can be proved without invoking this. However, we have chosen to present this proof because it furthermore clarifies the relative strength of the condition. Behaviours in the first step of the hierarchy must fulfill that a real positive semidefinite matrix exists whose off-diagonal block is P and whose remaining blocks are built from the vectors of single-party marginals, with entries arranged in lexicographical order; certain entries have to be fixed to zero (see the NPA construction for details). Thus, for any given A, the maximum value of tr(A^T P) attainable over this set cannot be larger than the solution of a semidefinite program (SDP) in primal form with this cost function. The dual form of this SDP corresponds to a minimization subject to a matrix constraint parametrized by a real vector. By duality, for any feasible dual point (i.e., one satisfying the constraint above), its objective value upper-bounds the primal solution. Thus, to finish the proof it suffices to construct a feasible dual point yielding the value sqrt(m_A m_B) ||A||_inf. This is the case for a choice built from the vector with all entries equal to one.
To see that it is feasible amounts to checking a matrix inequality. This is indeed true because, since the upper left corner of the constraint matrix is strictly positive definite, by Schur's complement condition feasibility is equivalent to a bound on the remaining block, which is obviously true given that the maximal eigenvalue involved is precisely ||A||_inf.

One of the appealing properties of Theorem 1 and inequality ([ineq1]) is that they have a very compact form. However, the reasoning used in this proof can be applied to obtain other stronger but more complicated conditions. Let us denote by the same prescription as in eq. ([pmatrix]) a matrix with suitably modified entries. We will now prove that for any choice of matrix an analogous bound should hold and, hence, the following.

Theorem 2. In every scenario, if P is in Q then the corresponding trace-norm bound holds for the modified matrix as well.

This proof goes along similar lines as the previous one, so we will only sketch the details. Membership in the first step of the hierarchy is actually equivalent to the positive semidefiniteness of a related matrix. Thus, the value of the cost function cannot be larger than the solution of the corresponding primal SDP, whose dual has constraint vectors with suitably modified entries. The same choice of dual point as in the previous proof does the job.

Let us finish this section analyzing the strength of Theorems 1 and 2 by considering a few examples. As mentioned before, our conditions are deduced from the definition of the first step of the NPA hierarchy. Obviously then, the best we can expect from them is to separate this set from its complement in NS. Therefore, since the quantum set is strictly smaller than this first-level relaxation, it is clear that there exist supra-quantum behaviours satisfying the conditions of Theorems 1 and 2 (i.e., they are necessary for a behaviour to be in Q but not sufficient). We have performed numerical explorations in the (2222) scenario that show that the gap between Q and the behaviours satisfying our conditions is small. Theorem 2 only provides a slightly better approximation of Q than Theorem 1. A more detailed example can be found in figure 1. To my knowledge, the only previous instance of analytical means to constrain Q is given by earlier results (which emerge from the first step of the hierarchy as well).
However, the application of these techniques is not completely straightforward, as they rely on some choice of data processing. Still, the inequality emerging from this approach reproduces the same restriction in the (2222) scenario. Notwithstanding, figure 1 shows that our conditions give a much tighter restriction already in the (3322) example considered there. This is also apparent in the (2233) case, where the previously presented results fail to completely reproduce the quantum boundary for the isotropic behaviours obtained by mixtures of a fully random box and the PR box (see eq. ([pr2d]) below). On the contrary, Theorems 1 and 2 are tight in this case. Thus, it is interesting to point out that while inequalities ([ineq1]) and ([ineq2]) depend on a proper choice of the matrix A for each case, these examples indicate that Theorems 1 and 2, which are straightforward to apply in general, still provide remarkably strong conditions. To show further the usefulness of these results, in the remaining sections we provide several applications of them.

* Figure 1: * A slice of the no-signaling polytope spanned by two PR boxes and the fully mixed behaviour. Horizontal lines achieve the same value for the Bell inequality (see text for details). Crosses: the boundary of L and Q (equal in this slice). Dashed line: the boundary of the first step of the NPA hierarchy. Solid line: the conditions of Theorems 1 and 2 (they are equivalent in this case). Dashed-dotted line: the inequality obtained with the sign-binning data processing. A more clever data processing for this case only manages to lower this condition slightly.

The previous examples already give a good idea of the strength of the conditions derived here.
To analyze their non-triviality in full generality, notice that, due to the convexity of the trace norm, ||P||_1 must attain its maximum value in NS at the vertices of the polytope. Hence, the ideal situation would be that the LDBs achieve the maximal possible value sqrt(m_A m_B) and that all PR boxes violate this bound. Actually, it has been only recently shown (see also related work) that all PR boxes (including the multipartite case) are not in Q. It was left as an open question there whether there exists a quantitative separation between them and Q. The interest of this question relies on the fact that PR boxes are the most advantageous resources in DIQIP applications such as cryptography. Thus, the closer Q can be to these behaviours, the more efficient these applications can be. In the following we show that all LDBs attain the bound and that all PR boxes in the scenarios considered below violate the bound. Thus, besides showing that the bound is not improvable and in general not trivial, we further provide a simple proof of the previous result in these cases. Moreover, we show that there actually exists a quantitative separation between these PR boxes and Q, answering the question raised therein in these scenarios.

Theorem 3. In every scenario, if P is in Q then ||P||_1 <= sqrt(m_A m_B), with equality for LDBs.

Every LDB is of the form P = a b^T, where the 0/1 lists a and b have m_A and m_B entries equal to 1, respectively, and 0 otherwise. Hence, ||P||_1 = ||a||_2 ||b||_2 = sqrt(m_A m_B). Notice that the fact that the bound holds in L follows then by the convexity of the trace norm without the need of invoking Theorem 1.
Evidently, that the inequality is fulfilled in L was already obvious from Theorem 1. The important observation here is that all LDBs attain the bound, hence showing that it cannot be improved. We analyze now the values ||P||_1 might take for PR boxes. Let us consider first the two-input scenario with d outputs. Taking the corresponding block to be a circulant matrix, the PR boxes in this case are given, up to relabelings of the inputs and the outputs, by the behaviour of eq. ([pr2d]). Since these transformations amount to certain permutations of the rows or columns that leave the trace norm invariant, it suffices to compute it for the matrix given in eq. ([pr2d]). Using the pinching inequality, we obtain a lower bound on the trace norm; conditions under which equality is attained in the pinching inequality are given in theorem 8.7 of the standard matrix-analysis literature, and it is easily checked that they are not met in this case. Hence, we obtain that ||PR||_1 > sqrt(m_A m_B), which amounts to the non-quantumness of these PR boxes by Theorem 1. Moreover, by a more refined use of the pinching inequality, we obtain the following stronger result, which shows the existence of a finite gap between these PR boxes and Q.

Theorem 4. In every two-input scenario with d outputs, the trace norm of the PR boxes is bounded away from the quantum bound of Theorem 1 by a finite amount.

We need to show the corresponding strict lower bound. By permutation matrices, which leave the trace norm invariant, we can map the PR box matrix to a form with modified blocks. Using now the pinching inequality on this form, we obtain the desired result. It might be interesting to note the following additional estimate.
To see this, notice the standard relations between the Frobenius norm and the trace and spectral norms, valid for any matrix. Numerics suggest that the above estimates can be improved, which would sharpen the bound in Theorem 4. Let us move now to the two-outcome scenario. The corresponding PR boxes have all been determined previously (see table II therein). We denote an arbitrary one of them by PR. One sees that (up to relabelings) these matrices always have the following structure: they have one block in the diagonal followed by further diagonal blocks of a few fixed types. Since the latter blocks all have unit trace norm, it follows again by the pinching inequality that the trace norm exceeds the quantum bound. This shows again, by virtue of Theorem 1, that there is a quantitative separation between the PR boxes in these scenarios and Q.

Theorem 5. In every two-outcome scenario, the trace norm of the PR boxes exceeds the quantum bound of Theorem 1 by a finite amount.

It might be interesting to mention as well that in this scenario, for the fully nondeterministic boxes, a similar estimate holds: for these behaviours there are non-vanishing entries with a fixed value and, similarly as before, the pinching inequality yields the above estimate.

One can see that the left-hand side of inequality ([ineq1]) defines an arbitrary Bell expression, i.e., any linear combination of the elements p(ab|xy). Since L, Q and NS are compact convex sets, there always exist such expressions separating them, i.e., linear functionals whose maximal values differ depending on whether they are optimized over L, Q or NS. The most characteristic one is the CHSH inequality (see below) in the (2222) scenario, for which the local, quantum and no-signaling maxima are 2, 2 sqrt(2) and 4, respectively. While to determine the optimal value over L and NS it suffices to check over the corresponding vertices (which can be computationally hard, as the number of vertices increases exponentially with the number of inputs), to determine the optimal value over Q, known as the Tsirelson bound, is a less straightforward task.
However, this is very relevant to identify optimal DIQIP performances in the context of quantum games. Thus, a remarkable feature of inequality ([ineq1]) is that it provides a systematic way to construct quantum upper bounds on arbitrary Bell inequalities. Actually, our result resembles an earlier one, but the latter only holds for the particular class of Bell inequalities based on correlators. Later on, we will discuss this relation further. To give a hint of the usefulness of inequality ([ineq1]), we will now show that it allows one to obtain Tsirelson's bound for the CHSH inequality. This inequality can be expressed by a coefficient matrix M with suitable blocks. It turns out that ||M||_inf = 2 and, hence, we obtain the trivial bound 4. Nevertheless, given that behaviours in Q must fulfill several different constraints, equivalent Bell inequalities can be expressed up to rescaling and addition of an offset. Thus, if we take a shifted matrix involving J (the matrix with all entries equal to 1), the resulting Bell expression is equivalent on Q, and computing its spectral norm yields from inequality ([ineq1]) the Tsirelson bound 2 sqrt(2). Since the CHSH Tsirelson bound is achievable by a quantum behaviour arising from certain measurements on a maximally entangled two-qubit state, and the optimal coefficient matrix is orthogonal, this also shows that for this behaviour ||P||_1 = 2 = sqrt(m_A m_B), achieving the bound of Theorem 1. It is a natural question to ask which other behaviours in Q can attain it. We computed ||P||_1 for the quantum behaviours yielding the largest known value for several two-outcome Bell inequalities, but, in general, the bound of Theorem 1 is not attained. Interestingly, when this occurs, the behaviour arises from a maximally entangled state of qubits. This seems to extend to scenarios with more outcomes.
In particular, in the (2233) scenario, the quantum behaviour maximally violating the CGLMP inequality was given previously. However, for it we find that the bound of Theorem 1 is not attained, while the maximal value is attained for the behaviour that yields the maximal CGLMP value when restricted to a maximally entangled two-qutrit state. This leads one to consider whether for every bipartite maximally entangled state of dimension d,

|phi_d> = (1/sqrt(d)) sum_{i=0}^{d-1} |ii>,  ([maxent])

there exist measurements such that the corresponding behaviour attains the bound of Theorem 1. In the following we show that this is indeed the case. We will use a known construction that provides measurements in a two-input, d-output scenario such that, acting on |phi_d>, they lead to the behaviour

p(ab|xy) = (2 d^3 sin^2[pi(a - b + alpha_xy)/d])^{-1},  ([maxentbeh])

where the offsets alpha_xy take the values +/- 1/4 and 3/4 depending on the input pair. This construction is enough to show that for a maximally entangled state of any dimension, there exists a behaviour for which the bound of Theorem 1 can be saturated. However, it should be stressed that other behaviours arising from maximally entangled states can have this property as well.

Theorem 6. For any maximally entangled state, the corresponding behaviour given in eq. ([maxentbeh]) attains the bound of Theorem 1: ||P||_1 = 2.

In order to obtain ||P||_1 we will compute its singular values, which we will denote by sigma_j for reasons that will become clear later. To that aim, we will first show that P is normal (i.e., P P^T = P^T P), implying that the singular values correspond to the absolute values of the eigenvalues.
By fixing the inputs, the matrix of the behaviour ([maxentbeh]) can be partitioned into four d x d blocks of the form p(ab|xy) = (2 d^3 sin^2[pi(a - b + alpha_xy)/d])^{-1}, with offsets such as alpha = -1/4 and alpha = 3/4 for particular input pairs. Thus, to ease the notation we will rewrite eq. ([pd]) in terms of these four blocks. Notice that the blocks are circulant matrices, as for every fixed input pair the entry depends only on (a - b) mod d. We will use the following properties of circulant matrices: they all have the same eigenvectors, corresponding to eigenvalues given by the entries of the first row weighted by powers of the d-th roots of unity. This particularly implies that all circulant matrices are normal and commute with each other. Hence, it is easy to check that P is normal if its blocks satisfy suitable compatibility relations. To see that this is indeed the case, consider that eq. ([eign]) tells us that the eigenvalues of our matrices are given by certain trigonometric sums. These summation formulas are computed in the appendix. Notice then that the resulting eigenvalues, corresponding to the same eigenvector, satisfy the required relations, which, in addition to the fact that these matrices are normal, implies indeed that P is normal. Hence, P is normal and, therefore, its singular values are the absolute values of its eigenvalues. We compute now the latter. Since all circulant matrices are diagonalized by the same unitary (i.e.,
with diagonal for every circulant matrix ) , it holds then that the matrix has the same eigenvalues as .using now the schur complement condition that tells us that we have that and , hence , thus , using eqs .( [ eigs ] ) , we obtain that which means that hence , we finally obtain that this result is interesting because it shows that the bound of theorem 1 is attainable in and that behaviours arising from maximally entangled states are extremal in this sense .notwithstanding , it has some further application .if a real square matrix has singular value decomposition given by then with orthogonal ( as so are and ) .thus , by choosing , we can always construct a bell expression such that for any given it holds that .remarkably , if we happen to have a quantum behaviour such that ( i. e. it saturates the bound of theorem 1 ) , then the aforementioned prescription immediately yields a bell expression which is maximized in by .this is because and , hence , by inequality ( [ ineq1 ] ) , there can not exist any other quantum behaviour such that .thus , this allows to construct bell inequalities which are maximally violated in by different behaviours of interest .theorem 6 shows that this is possible for maximally entangled states of any dimension .for example , following this prescription for the behaviour ( [ maxentbeh ] ) with leads to a bell expression which is then maximized in by and is given by the following coefficient matrix .hence , and are not orthogonal ( they are not square ) and nor is . still , it holds that as . ] notice that this bell inequality separates from as it is straightforward to find that the maximal value of under is .a different example of a bell inequality maximally violated by can be found in .thus , theorem 6 also shows that for a maximally entangled state of any dimension there always exists a bell inequality that is maximally violated in by it and how to construct it .
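the prescription above — reading the bell - expression matrix off the singular value decomposition so that its inner product with the given behaviour equals the sum of the singular values — can be sketched as follows ( the matrix p below is a random stand - in , not an actual behaviour ):

```python
import numpy as np

rng = np.random.default_rng(1)
p = rng.normal(size=(6, 6))   # random stand-in for a behaviour matrix
u, s, vt = np.linalg.svd(p)   # p = u @ diag(s) @ vt
b = u @ vt                    # candidate bell-expression matrix

# b is orthogonal, and <b, p> = tr(b^t p) equals the sum of the singular
# values of p, i.e. its trace norm -- the maximum over all orthogonal b
assert np.allclose(b @ b.T, np.eye(6))
assert np.isclose(np.trace(b.T @ p), s.sum())
```

since tr(b^t p) = tr(v u^t u diag(s) v^t) = tr(diag(s)) , the chosen b indeed picks out the trace norm of p .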
in this sense , one can then devise a diqip protocol for which maximally entangled states are optimal within .this might also be of relevance in the context of self - testing if it turned out that the behaviour ( [ maxentbeh ] ) is the only one maximizing these bell expressions in .self - testing arises when a certain behaviour is the unique one to attain a particular bell value .this allows to check the performance of a quantum set - up without trusting any of the devices , particularly when it can be made robust . using the techniques of with the bell inequality and its generalizations for other dimensions , it could be possible to check whether robust self - testing of maximally entangled states is possible in this way .notice , moreover , that the observation above , which allows to construct a bell expression such that , can be used in other contexts .for instance , the results of theorems 4 and 5 imply that for every pr box considered there we can write down constructively a bell expression , and hence a potential diqip protocol , whose performance has a quantitative gap with any quantum behaviour .if we restrict ourselves to two - outcome scenarios ( ) and taking , all behaviours in can be alternatively characterized by the correlators and the marginal expectations . as mentioned before , it has been shown in that for the particular class of correlator bell inequalities , must hold for every real matrix .
we will denote by the matrix with entries .it can be shown that the result above can also be proved using techniques similar to those in theorems 1 and 2 .for that , one just needs to consider that behaviours in must fulfill that a real positive semidefinite matrix exists with and and proceed as in the proofs of theorems 1 and 2 to upper bound .this not only provides an alternative proof of inequality ( [ eqepping ] ) but it also shows that this bound can not give stronger constraints than .moreover , as in theorems 1 and 2 this leads to the following condition latexmath:[\[\label{ineqc1 } that this condition is strictly weaker than theorem 1 ( i. e. every behaviour detected as non - quantum by the above inequality is also non - quantum by theorem 1 ) as we show in the following .for every two - outcome behaviour , if , then .using again the mapping from to as in the proof of theorem 4 , we obtain that \nonumber\\ & = \frac{\sqrt{m_am_b}+||c||_1}{2},\end{aligned}\ ] ] where the inequality comes from corollary 3 in .the above result agrees with intuition since with we are not fully characterizing .this suggests considering , in analogy with theorem 2 , a condition including the marginals and .indeed , using similar arguments based on the first step of the npa hierarchy one can show that , for every matrix , it must hold for every quantum behaviour that this particularly implies the following .for every two - outcome behaviour it holds that where has entries .we have to show that inequality ( [ ineqc2 ] ) is true for quantum behaviours .this follows from the fact that is equivalent to the positive semi - definiteness of where ( ) is an ()-dimensional vector with entries ( ) .
by the schur complement condition this leads to with and .proceeding as in the proofs of theorems 1 and 2 we can upper bound to obtain the desired result .we have shown that the first step of the npa hierarchy allows to obtain simple analytical conditions constraining the set of quantum behaviours in general bipartite bell scenarios , whose strength and non - triviality have been illustrated .since not all problems in quantum nonlocality and diqip can be addressed numerically , we expect these conditions to be of utility , filling the hitherto existing lack of such general tools .in fact , we have applied these conditions here to obtain a variety of results . in sec .[ nontsec ] we have shown that our bounds are tight and , in general , non - trivial , and we have used them to prove that there exists a finite gap ( whose size we have estimated ) between the quantum set and pr boxes in several general scenarios answering a question raised in . in sec .[ tsisec ] we have provided a systematic construction of quantum bounds for arbitrary bell inequalities and we have shown that for a maximally entangled state of any dimension one can obtain a behaviour that attains the bound of theorem 1 .interestingly , this can be translated into a bell inequality whose tsirelson bound is reached by such a state .this could be applied for robust self - testing of maximally entangled states using the techniques developed in .finally , in sec . [ corrsec ] we studied the particular case of correlation scenarios and established some links with the results of .several other ideas will be further investigated in the future .
given a bell inequality ,it would be interesting to find a procedure to find the best form of in ( [ ineq1 ] ) to obtain its quantum upper bound and when it can be optimal .it is also worth further research to characterize which behaviours in attain the bound in theorem 1 : do they only arise from maximally entangled states ?it would be also desirable to extend this approach to the multipartite setting .last , it is worth studying whether stronger analytical conditions as those derived here can be obtained by considering further steps of the npa hierarchy .i thank a. acin and his collaborators for sharing with me a draft of their work on bell inequalities maximally violated by maximally entangled states .this research was funded by the spanish mineco through grants mtm 2010 - 21186-c02 - 02 , mtm2011 - 26912 and mtm2014 - 54692 and the cam regional research consortium quitemad+cm s2013/ice-2801 .here we prove the summation formulas used in the proof of theorem 6 : which follow from to verify eq .( [ eqapp ] ) , first notice that the geometric sum yields and , therefore , thus , we find that notice that the inner sum is equal to zero , unless for any integer for which the sum is equal to .given the values the indices take , this can only happen for or , hence obtaining that leads to the desired result .masanes , s. pironio , and a. acin , nature comm . * 2 * , 238 ( 2011 ) ; e. hnggi and r. renner , arxiv:1009.1833 ( 2010 ) ; s. pironio , ll .masanes , a. leverrier , and a. acin , phys .x * 3 * , 031007 ( 2013 ) ; u. vazirani and t. vidick , phys .lett . * 113 * , 140501 ( 2014 ) .s. pironio _et al . _ ,nature * 464 * , 1021 ( 2010 ) ; r. colbeck and r. renner , nature phys .* 8 * , 450 ( 2012 ) ; r. gallego , ll .masanes , g. de la torre , c. dhara , l. aolita , and a. acin , nature comm . * 4 * , 2654 ( 2013 ) . after the publication of the first version of this preprint and before devising the proof of theorem 6 , a. 
acin informed me that his group is finishing a work on bell inequalities maximally violated by maximally entangled states using different techniques ( private communication ) .m. mckague , t. h. yang , and v. scarani , j. phys .* 45 * 455304 ( 2012 ) ; c. a. miller and y. shi , arxiv:1207.1819 ( 2012 ) ; t. h. yang and m. navascués , phys . rev .a * 87 * , 050102(r ) ( 2013 ) ; b. w. reichardt , f. unger , and u. vazirani , nature * 496 * , 456 ( 2013 ) .

the characterization of the set of quantum correlations in bell scenarios is a problem of paramount importance for both the foundations of quantum mechanics and quantum information processing in the device - independent scenario . however , a clear - cut ( physical or mathematical ) characterization of this set remains elusive and many of its properties are still unknown . we provide here simple and general analytical conditions that are necessary for an arbitrary bipartite behaviour to be quantum . although the conditions are not sufficient , we illustrate the strength and non - triviality of these conditions with a few examples . moreover , we provide several applications of this result : we prove a quantitative separation of the quantum set from extremal nonlocal no - signaling behaviours in several general scenarios , we provide a relation to obtain tsirelson bounds for arbitrary bell inequalities and a construction of bell expressions whose maximal quantum value is attained by a maximally entangled state of any given dimension .
the profile of the hii region around an isolated source of uv photons is an old topic in astrophysics .a classical result was given by strmgren ( 1939 ) , who showed that the profile of the spherical hii region of a point source embedded in a uniformly distributed hydrogen gas with number density n_{\rm h } at radial coordinate r can be approximately described by a step function as \[\label{eq1 } x_{\rm hii}(r ) = \left \{ \begin{array}{ll } 1 , & { \rm if \ } r < r_s \\ 0 , & { \rm if \ } r > r_s \\ \end{array } \right .\ ] where x_{\rm hii } = n_{\rm hii}/n_{\rm h } is the fraction of ionized hydrogen , n_{\rm hii } being the number density of ionized hydrogen . that is , hydrogen gas is sharply divided into two regions : within a sphere with strmgren radius r_s around the source , hydrogen is fully ionized , while outside the sphere hydrogen atoms remain neutral . r_s is determined by the balance between ionization and recombination \[\label{eq2 } \dot{n}_\gamma = \frac{4\pi}{3 } r_s^3 n_{\rm h}^2 \alpha , \ ] where \dot{n}_\gamma is the emission of ionizing photons of the source , and \alpha is the recombination coefficient of hii ( osterbrock & ferland 2005 ) .the sharp boundary at r_s is the ionization front ( i - front ) separating the hii and hi regions .the problem of the strmgren sphere has once again attracted many studies recently , because the formation of hii regions around high redshift quasars , galaxies and first stars is crucial to understanding the evolution of the reionization ( cen & haiman 2000 ; madau & rees 2000 ; ciardi et al .2001 ; ricotti et al .2002 ; wyithe & loeb 2004 ; kitayama et al .2004 ; whalen et al .2004 ; yu 2005 ; yu & lu 2005 ; alvarez et al .2006 ; iliev et al .2006 ) . unlike the static solution eq.([eq1 ] ) , new studies focus on the dynamical behavior of the ionized sphere , such as the time - dependence of the ionization profile , the propagation of the i - front .
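for illustration , the classical strmgren balance , r_s = [ 3 \dot{n}_\gamma / ( 4 \pi \alpha n_{\rm h}^2 ) ]^{1/3 } , can be evaluated directly ; a minimal sketch with illustrative numbers ( an o - star - like photon rate and an assumed case - b recombination coefficient ):

```python
import numpy as np

def stromgren_radius(ndot, n_h, alpha=2.6e-13):
    """static stromgren radius in cm: photon output balances recombinations,
    ndot = (4*pi/3) * r_s**3 * n_h**2 * alpha, solved for r_s.

    ndot  : ionizing-photon emission rate [photons/s]
    n_h   : hydrogen number density [cm^-3]
    alpha : recombination coefficient [cm^3/s] at ~1e4 K (assumed value)
    """
    return (3.0 * ndot / (4.0 * np.pi * alpha * n_h**2)) ** (1.0 / 3.0)

# illustrative numbers: an o-star-like source in a 10 cm^-3 medium
rs = stromgren_radius(ndot=5e48, n_h=10.0)
print(rs / 3.086e18, "pc")   # of order 10 pc
```

note the scalings : r_s grows only as the cube root of the source intensity and falls as n_{\rm h}^{-2/3 } .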
besides the hii region and the i - front, a high kinetic temperature region also exists around the uv photon source due to the photon - heating of gas .similar to the i - front , there is also a t - front separating heated and un - heated gas .the temperature profile and the t - front are important for probing reionization .for instance , the region with high kinetic temperature and low would be the 21 cm emission region associated with sources at the reionization epoch ( tozzi et al .2000 ; wyithe et al .2005 ; cen , 2006 ; chen & miralda - escude 2006 ) .although the possible existence of a 21 cm emission shell around high redshift quasars and first stars has been addressed qualitatively or semi - analytically in these references , a serious calculation seems to be still lacking .many numerical solvers for the radiative transfer equation have been proposed ( ciardi et al . 2001 , gnedin & abel 2001 , sokasian et al .2001 , nakamoto et al .2001 ; razoumov et al .2002 , cen 2002 , maselli et al .2003 , shapiro et al . , 2004 ;rijkhorst et al .2005 ; mellema et al .2006 ; susa 2006 , whalen & norman 2006 ) .these solvers provide numerical results of the i - front .however , the results are still diverse due to the usage of different approximations .some of the results show that the time scale of the i - front evolution is sensitively dependent on the intensity of source ( white et al .2003 ) , while some yield intensity - independent evolution ( e.g. 
mellema et al . 2006 ) .this is because the retardation of photon propagation is ignored in the latter , while the recombination is ignored in the former .therefore , it is worth re - calculating this problem without the above - mentioned assumptions .it has been pointed out that to study the dynamical features of the i- and t - fronts , one would need to apply high - resolution shock - capturing schemes similar to those developed in fluid dynamics ( razoumov & scott 1999 ) .the finite difference weno scheme is an algorithm satisfying this requirement .moreover , although the fraction of the remaining neutral hydrogen within the ionized sphere is extremely small , it is not zero .the small fraction is important to estimate the ly- photon leaking at high redshifts .therefore , a proper algorithm for the dynamical properties of the ionized sphere should be able to , on the one hand , effectively capture the sharp profiles of ionization and temperature around the i- and t - fronts , and , on the other hand , give a precise value of the remaining neutral hydrogen between the discontinuities .this can also be satisfied by the weno algorithm .the weno algorithm has proved to have high order of accuracy and good convergence in capturing discontinuities and complicated structures in fluid as well as to be significantly superior for piecewise smooth solutions containing discontinuities ( shu 2003 ) .we have shown that the weno algorithm is indeed effective for solving radiative transfer problems with discontinuities with high accuracy .for instance , it can follow the propagation of a sharp i - front and the step function cut - off of retardation ( qiu et al . 2006 ) .we now develop this method to solve both ionization and temperature profiles . it is not a trivial generalization .
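for concreteness , the reconstruction step at the heart of the fifth - order finite difference weno scheme of jiang & shu ( 1996 ) can be sketched in a few lines ; this is only the scalar reconstruction at one cell interface , not the production solver ( flux splitting , anti - diffusive corrections and runge - kutta time stepping are omitted ):

```python
def weno5(v0, v1, v2, v3, v4, eps=1e-6):
    """fifth-order weno reconstruction (jiang & shu 1996) of the value at the
    right interface x_{i+1/2} from the five cell averages v_{i-2}..v_{i+2}."""
    # smoothness indicators of the three candidate stencils
    b0 = 13/12 * (v0 - 2*v1 + v2)**2 + 1/4 * (v0 - 4*v1 + 3*v2)**2
    b1 = 13/12 * (v1 - 2*v2 + v3)**2 + 1/4 * (v1 - v3)**2
    b2 = 13/12 * (v2 - 2*v3 + v4)**2 + 1/4 * (3*v2 - 4*v3 + v4)**2
    # third-order candidate reconstructions on each stencil
    q0 = (2*v0 - 7*v1 + 11*v2) / 6
    q1 = (-v1 + 5*v2 + 2*v3) / 6
    q2 = (2*v2 + 5*v3 - v4) / 6
    # nonlinear weights: large smoothness indicator -> small weight
    a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
    return (a0*q0 + a1*q1 + a2*q2) / (a0 + a1 + a2)

# smooth (here linear) data: the reconstruction is exact at the interface
print(weno5(0.0, 1.0, 2.0, 3.0, 4.0))   # 2.5
# a sharp front: the weights drop the stencils crossing the jump (eno behaviour)
print(weno5(0.0, 0.0, 0.0, 1.0, 1.0))   # close to 0, no overshoot
```

on smooth data the weights approach the linear ones ( 0.1 , 0.6 , 0.3 ) and the scheme is fifth - order ; at a discontinuity it falls back to the smoothest stencil , which is what allows a sharp i - front to be captured without oscillations .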
because the rate equations of heating - cooling and ionization - recombination are stiff , the time integration of the weno scheme can not be directly implemented .a proper strategy to save computational cost on time integration will be developed .a long - term motivation of this work , as we mentioned in ( qiu et al .2006 ) , is to develop a weno solver of hydrodynamic / radiative transfer problems , similar to the development of a hybrid algorithm of hydrodynamic / n - body simulation based on weno scheme ( feng et al .2004 ) .the paper is organized as follows .section 2 describes the problem and equations needed to be solved .section 3 presents the weno numerical scheme .section 4 gives the solutions of the temperature and ionization profiles and the evolution of energy spectrum of photons .a discussion and conclusion are given in section 5 .details of the atomic processes are listed in the appendix .to demonstrate the algorithm , we consider the ionization of a uniformly distributed hydrogen gas in space with number density by a point uv photon source located at the center . 
adding helium component in the gas is straightforward and will not change the algorithm for radiative transfer .if the time scale of the growth of the ionized sphere is less than , being the hubble parameter , the expansion of the universe can be ignored .the radiative transfer equation of the specific intensity is then ( see the appendix ) where is the frequency of photon .the source term , , is given by where is the energy distribution of photons emitted by the central source per unit time within the frequency range from to .we assume the energy spectrum of uv photons to be of a power law , and is the ionization energy of the ground state of hydrogen ev .integration of over gives the total intensity ( energy per unit time ) of ionizing photons emitted by the source , .the absorption coefficient of eq.([eq3 ] ) is , where the cross section and .the evolution of the number density of neutral hydrogen hi , , is governed by the ionization equation , where is the fraction of neutral hydrogen , and is the number density of electrons . obviously , . in eq.([eq5 ] ), is the recombination coefficient and is the collision ionization rate .the photoionization rate is given by the kinetic temperature of baryon gas is determined by the equation where erg is the boltzmann constant and the temperature is in unit of k. the details of the heating and cooling are given in the appendix .in the numerical implementation , it is convenient to introduce the dimensionless variables of time , space and frequency defined by , and . is the optical depth of ionizing photon in neutral hydrogen gas with density .therefore , and are respectively , the time and distance in units of mean free flight time and mean free path of ionizing photon in the non - ionized background hydrogen gas . 
for the model , , where is redshift , myrs and mpc .correspondingly , the intensity is rescaled by .thus , eqs.([eq3 ] ) , ( [ eq5 ] ) , ( [ eq6 ] ) and ( [ eq7 ] ) become and where we have assumed and the point source condition eq.([eq4 ] ) requires .if we take , the total intensity of the source is given by in our numerical calculation , we solve the system of equations ( [ eq8 ] ) , ( [ eq9 ] ) and ( [ eq11 ] ) for the specific intensity , the fraction of the neutral hydrogen and the temperature as functions of the radius , frequency and time .we will drop the prime in the variables , , and in this section below , when there is no ambiguity , and keep prime in the variables in showing the numerical results . to solve the radiative transfer equation, we adopt the fifth - order finite difference weno scheme with anti - diffusive flux corrections .the fifth - order finite difference weno scheme was designed in ( jiang shu 1996 ) and the anti - diffusive flux corrections to the high order finite difference scheme was designed in ( xu shu 2005 ) .the objective of the anti - diffusive flux corrections is to sharpen the contact discontinuities in the numerical solution of the weno scheme as well as to maintain high order accuracy .a fourth order quadrature formula is used in the computation of integration in equations ( [ eq10 ] ) and ( [ eq1112 ] ) .the third order tvd runge - kutta time discretization is used in time integration for the system of equations ( [ eq8 ] ) , ( [ eq9 ] ) and ( [ eq11 ] ) .we now describe our numerical algorithm in more detail .the computational domain is \times [ 1 , \nu_{max}] ] , where is time - independent , and is only a function of .that is , all the time - dependence of can be described by the i - front . 
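the stiffness of the ionization - recombination rate equation mentioned above can be illustrated with a toy scalar version of eq . ( [ eq9 ] ) integrated implicitly ; the rate values below are hypothetical and chosen only to make the equation stiff , and this backward - euler update is a sketch , not the time - integration strategy actually implemented in the code :

```python
import math

# toy stiff rate equation for the neutral fraction x = n_HI / n_H:
#   dx/dt = -gamma * x + alpha_nh * (1 - x)**2
# (photoionization vs. recombination; hypothetical rates, gamma >> alpha_nh)
gamma, alpha_nh = 1.0e3, 1.0e-1

def backward_euler_step(x, dt, newton_iters=30):
    """one implicit (backward euler) step, solved by newton iteration."""
    y = x
    for _ in range(newton_iters):
        g = y - x - dt * (-gamma * y + alpha_nh * (1.0 - y) ** 2)
        dg = 1.0 - dt * (-gamma - 2.0 * alpha_nh * (1.0 - y))
        y -= g / dg
    return y

# a time step ~100x above the explicit stability limit (~2/gamma) stays stable
dt, x = 0.1, 1.0
for _ in range(100):
    x = backward_euler_step(x, dt)

# equilibrium root of gamma*x = alpha_nh*(1-x)**2
r = gamma / alpha_nh
x_eq = (r + 2.0 - math.sqrt((r + 2.0) ** 2 - 4.0)) / 2.0
print(x, x_eq)   # the implicit solution relaxes to the tiny equilibrium x
```

an explicit update with the same dt would blow up ; the implicit step instead damps onto the very small equilibrium neutral fraction , which is exactly the quantity that must be resolved inside the ionized sphere .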
in the region , the ionization profile is time - independent .it can be given by a static or stationary solution of the radiative transfer equation .this property has been applied in some codes for radiative transfer , in which the ionization field is given by the static solution of the radiative transfer equation for a given matter distribution .for the temperature profile , one might also approximately define a t - front function by a step function as ] , where is time - independent .that is , the -dependence of the function can not be separated with .especially , the function in the range between the i- and t - fronts actually always depends strongly on .figure [ fig3 ] shows that , for a very strong source erg s , we have , i.e. the ionizing front moves with a speed close to the speed of light . for a weak source erg s, the profile does not show a significant expansion of the i - front if time is larger than a few of free flight times of ionizing photons .this result is also evident in figure [ fig4 ] on versus . is defined by .it denotes the size , within which , i.e. , 90% hydrogen are ionized .therefore , it can be used for the i - front . for a strong source , such as ergs s , we have approximately , which implies that the ionizing region grows with an ionizing front propagating with almost the speed of light . for weak sources, are also following at very small , but become when is large .the weaker the sources , the earlier the stage of takes place .this point is more clearly shown in the right panel of figure [ fig4 ] .we see that the speed is close to when is small , and then the speed decreases with by a power law with .one can define a time , larger than which , the speed starts to decrease with by the power law . 
at time , the ionizing sphere is still expanding , but very slowly .it will finally approach a solution , in which the ionization equilibrium is approximately established , and the ionized sphere becomes time - independent .the time depends on the source intensity .the stronger the source , the larger the time .

[ figure 4 : left panel : , which is the solution of . the light solid line is . right panel : vs. . other parameters and the mesh in the numerical simulation are the same as those in figure [ fig3 ] . the sources are taken to be ( dash dot dot ) , ( dash dot ) , ( dash ) , and ( solid line ) erg s . time is dimensionless . ]

obviously , the speed of the propagation of the t - front can not be larger than the speed of light , and therefore , the t - front will approximately coincide with the i - front when . only in this period , the ionized sphere is the same as the heated sphere . when , the t - front starts to exceed the i - front , and the pre - heating layer is formed .therefore , the formation of the pre - heated layer happens later for a stronger source .figure [ fig5 ] gives a comparison of and at time myrs for sources with different intensity .we can see that the pre - heating layer has been well established at time myrs for all sources with erg s , while the t - front of the source with erg s is still about the same as the i - front .one can expect that for a source of erg s , a preheated layer will be formed at time myrs and on comoving distance mpc from the source .

[ figure 5 : ( left panel ) and ( right panel ) at time myrs for sources with intensity ( dash dot dot ) , ( dash dot ) , ( dash ) , and ( solid line ) erg s . and are the physical and dimensionless distance , respectively . the power - law frequency spectrum has index . the redshift is taken to be . the mesh in the numerical simulation is the same as that in figure [ fig3 ] . ]

the formation of the pre - heated layer with high and high is due to the lack of soft photons beyond the i - front , while hard photons are still abundant in that region . in other words , the energy spectrum of the photons is significantly hardened around the i - front .we now directly demonstrate the evolution of the photon frequency spectrum , as our code can effectively reveal the evolution of radiation in the -space .the left panel of figure [ fig6 ] gives the frequency spectra of 1 . ) source erg s at time myrs and physical distance , 0.11 , and 0.14 mpc , and 2 . ) source erg s at time myrs and physical distance and 0.24 mpc .we can see a significant -dependence of the frequency spectrum . at a small the spectra are still almost the same as the original power - law spectrum with , while at large , they significantly deviate from the original power - law .all photons of , are exhausted within mpc . at high frequency , the spectra are still of power - law with , while at they are substantially dropped .it shows a peak at , and looks like a spectrum with self - absorption .however , unlike a self - absorption spectrum , the position of the peak is not fixed in the frequency - space , it moves to higher frequency with time .namely , the photon energy spectrum is harder at later time .

[ figure 6 : vs. at time myrs . 1 . ) left panel : for the source erg s , at physical distance ( dash dot ) , 0.11 ( dash ) and 0.14 ( solid line ) mpc ; and 2 . ) right panel : for the source erg s at physical distance ( dash ) and 0.24 ( solid line ) mpc . redshift is . the mesh in the numerical simulation is the same as that in figure [ fig3 ] . ]

the spectral hardening can be measured by the index of power law defined as figure [ fig7 ] plots vs. for the frequency spectra of figure [ fig6 ] . obviously , in the band , becomes smaller at larger .both figures [ fig6 ] and [ fig7 ] show that the frequency spectra of photons depend strongly on and when and are close to or inside the pre - heating layer .

[ figure 7 : vs. for the source erg s at physical distance ( dash ) and 0.24 ( solid line ) mpc . redshift is . the mesh in the numerical simulation is the same as that in figure [ fig3 ] . ]

we have described a weno scheme which is able to solve the phase - space distribution function of photons in an isolated ionized patch around an individual source in the early stage of reionization .this algorithm can produce robust results for the propagation of the i- and t - fronts .it can also give stable results for the time - dependent distribution of the small fraction of neutral hydrogen within the i - front .we have developed the method to deal with the stiffness of rate equations in time integration .consequently , the computational speed is acceptable .this algorithm can be applied to problems with a wide range of intensity of sources , from to 10 erg s with a power law spectrum at redshift .
since this algorithm treats the frequency space as well as physical space , it can also be used for sources with other frequency spectra .we have not considered helium and secondary effects in the atomic processes yet , however the algorithm has no practical difficulty in including these factors .we show that a common feature of the uv photon sources in the reionization epoch is to form a preheating region . in the first stage the i- and t - fronts in baryon gas are coincident , and propagate with a speed about the same as the speed of light .when the frequency spectrum of the uv photons is hardened , the evolution enters into the second stage .the propagation speeds of both the ionizing and heating fronts are less than that of light , but the t - front is always moving faster than the ionizing front . in the spherical shell between the i- and t - fronts , the kinetic temperature of gas can be as large as , while atoms are almost neutral . obviously , the shell would be of interest in the search for the 21 cm emission from the reionization epoch .the details of the preheating region are sensitively dependent on the parameters of the problem . for instance , the radius and thickness of the preheated shell do not show a scaling relation in their dependence on the source intensity .therefore , an algorithm that can properly handle models with various parameters is necessary . using these results , one can make comments on some numerical solvers used for the radiative transfer equations .several solvers are based on the approximation of omitting the time - derivative term of the radiative transfer equation ( nakamoto et al .
2001 ; cen 2002 ; razoumov et al .2002 ; rijkhorst et al .2005 ; susa 2006 ) .they calculate the static ionization field for a given uniform or non - uniform density field of cosmic baryon matter .these codes are essentially generalizing the static solution eq.([eq1 ] ) to the case of an inhomogeneous background distribution of baryon matter , and multiple sources .such an approximation is useful to calculate the ionization field for each given inhomogeneous mass field of baryon matter , but it will no longer be a good approximation when the retardation effect due to photon propagation is important . for the problem of ionization profiles around a point source , the retardation of photons is not always negligible .for instance , one finds the following parameters have been used in the model : the ionization and heating at comoving distance h mpc from a point source of age yrs at redshift ( e.g. cen , 2006 ) .these parameters already violate the retardation constraint .the retardation effect can be seen from the time - dependence of the i - front , .if the retardation effect is negligible , an analytical solution of the i - front radius is given by e.g. whalen & norman ( 2006 ) \[\label{shuadd61 } r_i(t ) = r_s \left [ 1 - e^{-t / t_{rec } } \right]^{1/3 } , \ ] where r_s is the static radius of the strmgren sphere given by eq.(2 ) , and t_{rec } is the recombination time scale . according to eq.([shuadd61 ] ) , the time scale of the evolution is t_{rec } , which is independent of the source intensity .that is , only if , regardless . however , figure [ fig4 ] shows that the time scale of the evolution is -dependent .this result also holds if we replace by , e.g. , and is true for a monochromatic source too .
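the non - retarded growth law quoted from whalen & norman ( 2006 ) , r_i(t ) = r_s [ 1 - e^{-t / t_{rec } } ]^{1/3 } , is easy to explore numerically ; a dimensionless sketch ( r_s = t_{rec } = 1 ) showing the intensity - independent saturation on the recombination time scale :

```python
import numpy as np

def r_i(t, r_s=1.0, t_rec=1.0):
    """non-retarded i-front radius, r_i(t) = r_s * (1 - exp(-t/t_rec))**(1/3)."""
    return r_s * (1.0 - np.exp(-t / t_rec)) ** (1.0 / 3.0)

# early growth ~ r_s*(t/t_rec)**(1/3); saturation at r_s after ~t_rec --
# a time scale that does not involve the source intensity, which only
# enters through the overall scale r_s
print(r_i(1e-3), r_i(1.0), r_i(1e2))
```

the code makes the point of the text concrete : in this approximation the front would relax to the strmgren radius on the same time scale t_{rec } regardless of how strong the source is , in contrast with the intensity - dependent behaviour seen in figure [ fig4 ] .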
considering the retardation effect , an analytical solution is approximately given by the following algebraic equation ( white et al .2003 ) \[\label{shuadd62 } r_i(t ) = \left [ \frac{3 \dot{n}_\gamma \left ( t - r_i(t)/c \right ) } { 4 \pi n_{\rm h } } \right]^{1/3 } . \ ] according to eq.([shuadd62 ] ) , when , the speed of the i - front , , and when , .it is qualitatively consistent with that shown in figure [ fig4 ] .first , the time scale of the evolution is approximately , the larger the , the longer the .second , decreases with by a power law when is large .however , the power law index .this is expected , because eq.([shuadd62 ] ) does not include the effect of recombination , which leads to a slower decrease than .a common assumption used in eqs.([shuadd61 ] ) and ( [ shuadd62 ] ) is that the distribution is described by a step function like eq.([eq1 ] ) . obviously , it ignores the neutral hydrogen hi that probably remains within .the tiny fraction of hi can also hardly be calculated well by monte - carlo codes ( ciardi et al .2001 ; maselli et al .2003 ) , which yield large numerical errors due to poisson shot noise .the weno algorithm has been successfully applied to kinetic equations of the distribution function in the phase space with one or two spatial dimensions and two or three phase space dimensions with acceptable computational speed ( carrillo et al . 2006 ) .we believe that it is not difficult to implement the weno algorithm for radiative transfer problems with similar dimensions in the phase space .
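eq . ( [ shuadd62 ] ) is an implicit algebraic equation for r_i(t ) ; written as r^3 / a + r / c = t ( with a = 3 \dot{n}_\gamma / 4 \pi n_{\rm h } , in arbitrary toy units here ) the left side is monotone in r , so it can be solved robustly by bisection , and the root automatically satisfies r_i <= c t :

```python
def r_i_retarded(t, a=1.0e6, c=1.0, n_bisect=200):
    """solve r**3/a + r/c = t by bisection (a = 3*ndot/(4*pi*n_h), toy units).
    at r = c*t the residual is nonnegative, so the root lies in [0, c*t]:
    the i-front never outruns the light cone."""
    lo, hi = 0.0, c * t
    for _ in range(n_bisect):
        mid = 0.5 * (lo + hi)
        if mid**3 / a + mid / c < t:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# early times: the i-front moves at essentially the speed of light
print(r_i_retarded(1e-4) / 1e-4)
# late times: it falls far below c*t and approaches (a*t)**(1/3)
print(r_i_retarded(1e6))
```

this reproduces the two regimes discussed in the text : a light - speed phase at early times and a decelerating power - law phase later , with the transition time growing with the source intensity ( through a ) .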
in our calculation , the evolution the cosmic baryon gas is not tracked by the hydrodynamic equations , but is simply assumed to have a uniform distribution with density .this treatment would be reasonable if the typical time scale of the relevant hydrodynamic effects are less than that of the i- and t - fronts .for instance , dynamical effects associated with sonic propagation would be negligible in solving the propagation of the i- and t - fronts .of course , one can expect that richer results will be yielded if the weno scheme for radiative transfer problems can be incorporated with the euler hydrodynamics .since the weno scheme for the cosmological hydrodynamical simulation has already been well established , it would be possible to develop a unified radiation / n - body / hydrodynamics code for cosmological problems . * acknowledgments .* this work is supported in part by the us nsf under the grants ast-0506734 and ast-0507340 .llf acknowledges support from the national science foundation of china under the grant 10573036 .for an ionized sphere associated with a point photon source , the radiation transfer ( rt ) equation is ( bernstein , 1988 , qiu et al .2006 ) where is the specific intensity , the cosmic factor , , the frequency of photon and a unit vector in the direction of photon propagation . in eq.([eqa1 ] ) , we take . and are , respectively , the absorption and sources of photons . the absorption coefficient of eq.([eqa1 ] )is given by where the cross section .\5 . cooling .since only the recombination cooling is important , we have ^ 2 \\\nonumber & + & 1.42\times 10^{-27 } t^{1/2}[1-f_{\rm hi}]^2 \\\nonumber & + & 2.45\times 10^{-21}t^{1/2 } e^{-157809.1/t}(1+t_5^{1/2})^{-1 } ( 1-f_{\rm hi})f_{\rm hi } \\\nonumber & + & 7.5\times 10^{-19 } e^{-118348/t}(1+t_5^{1/2})^{-1 } ( 1-f_{\rm hi})f_{\rm hi}\end{aligned}\ ] ] where .the terms on the r.h.s . 
of eq.([eq16 ] ) are , respectively , the recombination cooling , collisional ionization cooling , collisional excitation cooling and bremsstrahlung .both and are in the unit of ergs s . | we develop a numerical solver for radiative transfer problems based on the weighted essentially nonoscillatory ( weno ) scheme modified with anti - diffusive flux corrections , in order to solve the temperature and ionization profiles around a point source of photons in the reionization epoch . algorithms for such simulation must be able to handle the following two features : 1 . the sharp profiles of ionization and temperature at the ionizing front ( i - front ) and the heating front ( t - front ) , and 2 . the fraction of neutral hydrogen within the ionized sphere is extremely small due to the stiffness of the rate equations of atom processes . the weno scheme can properly handle these two features , as it has been shown to have high order of accuracy and good convergence in capturing discontinuities and complicated structures in fluid as well as to be significantly superior over piecewise smooth solutions containing discontinuities . with this algorithm , we show the time - dependence of the preheated shell around a uv photon source . in the first stage the i - front and t - front are coincident , and propagate with almost the speed of light . in later stage , when the frequency spectrum of uv photons is hardened , the speeds of propagation of the ionizing and heating fronts are both significantly less than the speed of light , and the heating front is always beyond the ionizing front . in the spherical shell between the i- and t - fronts , the igm is heated , while atoms keep almost neutral . the time scale of the preheated shell evolution is dependent on the intensity of the photon source . we also find that the details of the pre - heated shell and the distribution of neutral hydrogen remained in the ionized sphere are actually sensitive to the parameters used . 
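The cooling terms listed above can be collected into a small routine. In the sketch below the three fully printed coefficients are used as given, while the recombination-cooling prefactor (elided in the extraction) is replaced by a standard hydrogen fit (cf. Katz, Weinberg & Hernquist 1996); the hydrogen-only composition and the density scaling are assumptions, not statements from the source.

```python
import numpy as np

# Hydrogen cooling function sketch.  The source lists recombination,
# collisional-ionization, collisional-excitation and bremsstrahlung
# cooling; the recombination prefactor is elided there, so the standard
# fit below is an assumption.  Coefficients 1.42e-27, 2.45e-21 and
# 7.5e-19 are as printed in the source.
def cooling_rate(T, f_hi, n_h):
    """Volumetric cooling rate [erg cm^-3 s^-1].

    T    : temperature [K]
    f_hi : neutral hydrogen fraction
    n_h  : total hydrogen number density [cm^-3] (scaling assumed)
    """
    t5 = T / 1.0e5
    n_e = (1.0 - f_hi) * n_h            # electrons from ionized hydrogen
    n_hii = (1.0 - f_hi) * n_h
    n_hi = f_hi * n_h
    # recombination cooling (assumed standard fit; prefactor elided in source)
    rec = (8.70e-27 * np.sqrt(T) * (T / 1e3) ** -0.2
           / (1.0 + (T / 1e6) ** 0.7) * n_e * n_hii)
    # free-free (bremsstrahlung)
    ff = 1.42e-27 * np.sqrt(T) * n_e * n_hii
    # collisional ionization (coefficient as printed in the source)
    ci = 2.45e-21 * np.sqrt(T) * np.exp(-157809.1 / T) / (1.0 + np.sqrt(t5)) * n_e * n_hi
    # collisional excitation (Lyman-alpha)
    ce = 7.5e-19 * np.exp(-118348.0 / T) / (1.0 + np.sqrt(t5)) * n_e * n_hi
    return rec + ff + ci + ce
```

Note that a fully neutral parcel (f_HI = 1) has no free electrons and hence zero cooling under all four channels, which is why the tiny residual ionized fraction matters for the thermal state inside the sphere.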
the weno algorithm can provide stable and robust solutions to study these details . keywords : cosmology : theory , gravitation , hydrodynamics , methods : numerical , shock waves . pacs : 95.30.jx , 07.05.tp , 98.80.-k |
the lotka - volterra equation is at the heart of population dynamics , but also possesses a famous economic interpretation. introduced by richard goodwin in 1967 , the model in its modern form reduces to the planar oscillator on a subset of : [ eq : goodwin_model ] \ { ll d x_t & = x_t ((y_t)-)dt + d y_t & = y_t ((x_t ) -)dt . , where denotes the wage share of the working population and the employment rate , and are constant , and the following assumption is made on and .[ ass : non - linear goodwin ] consider system .1 . , , for all , and . , for all , and .lemma [ lem : goodwin system ] below asserts that assumption [ ass : non - linear goodwin ] is sufficient to have for any if .this property preserves the above interpretation for and : the employment rate can not exceed one for obvious reasons , but the wage share can , depending on the chosen economic assumptions , see .this distinctive feature of the economic version on its biological counterpart follows from a construction based on assumptions describing a closed capitalist economy .it can be done in three steps : 1 . assume a leontief production function with full utilization of capital , i.e. , .here , is the yearly output , the invested capital , a capital - to - output ratio , is the average productivity of workers and is the size of the labor class .2 . the capital depreciates and receives investment , i.e. , , where is the depreciation rate and the investment function .goodwin originally invokes say s law , i.e. , .3 . assume a reserve army effect for wage negotiation of the form where represents the real wage of the total working population , and is the phillips curve .defining allows to retrieve for .the class - struggle model has been extensively studied because it allows to generate endogenous real business cycles affecting the production level , e.g. . 
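The planar system above can be explored numerically. The sketch below integrates a concrete instance with RK4; the linear Phillips curve, the Leontief-style kappa(x) = (1 - x)/nu, and all parameter values are illustrative assumptions, not the paper's choices. The function `entropy` implements the corresponding first integral, whose conservation along trajectories reflects the closed-orbit property asserted for the deterministic system.

```python
import numpy as np

# RK4 integration of a deterministic Goodwin system
#   dx/dt = x (phi(y) - ALPHA),   dy/dt = y (kappa(x) - GAMMA),
# with illustrative functional forms and parameters (assumptions).
ALPHA, GAMMA, NU = 0.02, 0.03, 3.0
phi = lambda y: -0.04 + 0.10 * y          # wage-bargaining (Phillips) curve
kappa = lambda x: (1.0 - x) / NU          # employment growth, decreasing in x

def rhs(z):
    x, y = z
    return np.array([x * (phi(y) - ALPHA), y * (kappa(x) - GAMMA)])

def rk4_step(z, dt):
    k1 = rhs(z); k2 = rhs(z + 0.5 * dt * k1)
    k3 = rhs(z + 0.5 * dt * k2); k4 = rhs(z + dt * k3)
    return z + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def entropy(z):
    # First integral V(x, y) = -int (kappa-GAMMA)/s ds + int (phi-ALPHA)/s ds;
    # dV/dt = 0 along solutions, so level sets are the closed orbits.
    x, y = z
    gx = x / NU - (1.0 / NU - GAMMA) * np.log(x)
    fy = 0.10 * y - (ALPHA + 0.04) * np.log(y)
    return gx + fy
```

Starting from a point in the positive quadrant and stepping with dt = 0.01, the value of `entropy` along the numerical trajectory drifts only by the RK4 truncation error, confirming the orbit structure.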
on this matter ,goodwin himself conceded that the model is `` starkly schematized and hence quite unrealistic '' .it hardly connects with irregular observed trajectories , see .the objective of this paper is thus to study the following perturbed version of by a standard brownian motion on a stochastic basis : [ eq : stochastic_goodwin_1 ] \ { ll d x_t & = x_t (((y_t)-+ ^2(y_t))dt + ( y_t)dw_t ) + d y_t & = y_t (((x_t ) - + ^2(y_t ) ) dt + ( y_t)dw_t ) . , where is a positive function of bounded by , and the filtration is generated by paths of .the form of is discussed in remark [ rem : conditions ] after . a stronger condition , assumption [ass : growth conditions ] , is assumed later on the behavior of to ensure that solutions of remain in .the example of section [ sec : example ] will also illustrate how such condition can hold .we modify the economic development ( i ) , ( ii ) and ( iii ) by introducing the perturbation on one assumption , namely we assume that for , [ eq : stochastic_productivity ] da_t : = a_t d_t = a_t ( dt - ( y_t)dw_t ) , a_00 , instead of . using it formula with in the previous reasoning retrieves .productivity is one of the few exogenous parameters of the model , and one of those that were significantly invoked as influencial over business cycles , e.g. . without arguing for the pertinence of that particular assumption ,we simply suggest here that a standard continuous perturbation in this crucial parameter seems a good starting point . 
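A trajectory of the perturbed system can be simulated with an Euler-Maruyama step; note that both components are driven by the same Brownian increment, as in the productivity-shock derivation above. The volatility sigma(y) = s0 (1 - y), the unit coefficient on the Itô-correction drift sigma^2(y) (the exact coefficient is elided in the extraction), and all parameters are illustrative assumptions.

```python
import numpy as np

# Euler-Maruyama sketch of the stochastically perturbed Goodwin system.
# Shared Brownian motion in both components; sigma(y) = S0*(1 - y) is an
# assumption expressing that the perturbation shrinks near full employment.
rng = np.random.default_rng(0)
ALPHA, GAMMA, NU, S0 = 0.02, 0.03, 3.0, 0.05
phi = lambda y: -0.04 + 0.10 * y
kappa = lambda x: (1.0 - x) / NU
sigma = lambda y: S0 * (1.0 - y)

def em_path(x0, y0, dt, n_steps):
    x, y = x0, y0
    path = [(x, y)]
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))      # common Brownian increment
        s = sigma(y)
        # unit coefficient on the s**2 drift correction is an assumption
        x += x * ((phi(y) - ALPHA + s * s) * dt + s * dw)
        y += y * ((kappa(x) - GAMMA + s * s) * dt + s * dw)
        path.append((x, y))
    return np.array(path)
```

For small volatility the path stays strictly positive over the simulated horizon, in line with the existence conditions discussed later.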
to our knowledge , this is the first attempt to consider random noise in goodwin interpretation of the famous prey - predator model .to stay in the spirit of the economic application , the present paper studies the cyclical behavior of the deterministic system and the stochastic version .namely , our contribution are as follows , developed in the present order : * in section [ sec : goodwin ] , we fully characterize solutions of and the period of their orbits .this generalizes standard results on lotka - volterra systems to bounded domains of existence . * in section[ sec : stochastic ] , we provide existence conditions for regular solutions of .we use the entropy of to estimate the deviation induced by .we provide a definition of stochastic orbits for . the proof that solutions of draw stochastic orbits in finite time around a unique point is given in section [ sec : proof ] .our contribution has to be put in contrast with numerous studies of random perturbations of the lotka - volterra system .apart from the obviously different origin of perturbations in the model , attention was mainly given to systems like for its asymptotic behavior ( e.g. , ) , regularity , persistence and extinction of species ( e.g. , ) , and the addition of regimes , jumps or delay ( e.g. , ) .here , we attempt to provide a relevant description of trajectories and indirectly , namely a cyclical behavior .this is done using stochastic lyapunov techniques for recurrent domains as described in . 
by conveniently dividing the domain , we obtain that almost every trajectory `` cycles '' around a point in finite time .the -boundedness is out of reach with our method , but numerical simulations are presented in section [ sec : example ] , not only to provide expectation of cycles , but also allow to conjecture a limit cycle phenomenon for the expectation of .it is somewhat unclear how assumption [ ass : non - linear goodwin ] and late assumption [ ass : growth conditions ] on , and , are relevant in these results .we show below thatthey are sufficient to obtain existence of regular solutions to .this actually emphasizes the role played by the entropy of the deterministic system in the well - posedness of the stochastic system and as a natural measure for perturbation .according to assumption [ ass : non - linear goodwin ] , there exists only one non - hyperbolic equilibrium point to in given by . on the boundary of ,there exists also an additional equilibrium which is eluded along the paper .[ def : lyapunov ] let and be three functions defined by and [ lem : goodwin system ] let .let assumption [ ass : non - linear goodwin ] hold .then a solution to starting at at describes closed orbits given by the set of points , and for all .[ proof : goodwin system ] it is well - known that is a lyapunov function and a constant of motion for system : and take non - negative values with , and .additionally , under assumption [ ass : non - linear goodwin].(i)-(ii ) , so that for any , and the solution stays in .the value of characterizing an orbit , it is in bijection with its period . 
the following generalizes .[ th : periods_goodwin ] let be a solution to satisfying assumption [ ass : non - linear goodwin ] , with .let , and the two solutions to equation .define three functions by then for defined by [ eq : period_goodwin ] t(v_0):=_log()^(|x ) - dz .let , and a solution to starting at .according to lemma [ lem : goodwin system ] , implies .then =:i ] , \x [ \wt{y } , \bar{y}] ] and ] , separation of variables in provides two quantities and : [ eq : period_function_1 ] f(u):=_0^u ds = _^z[(e^s ) - ] ds = : g(z ) .the function verifies , is increasing on ] with so that . coming back to we get implying that ] .we can write where and are the two restrictions of on and respectively . notice that if ] .thus , is a strictly increasing function of taking its values in ] , while minimums are given by .this sums up with )\subset [ 0,v_0] ] almost surely . also , on that set . put in another way , on . according to bayes rule , now introduce the main result of the paper .we provide the following tailor - made definition for the cycling behavior of .[ def : stochastic orbit ] let and .let be a stochastic process starting at staying in almost surely .we then introduce the angle between ^{\top} ] .let be a stopping time ( a stochastic period ) .then , the process is said to orbit stochastically around in if almost surely .[ th : finite period ] let and a solution to starting at . then orbits stochastically around in .more precisely the system produces clockwise orbits inside .the angle is only defined if .this can be ensured by either proving that for all almost surely , or by defining as in definition [ def : stochastic orbit ] .see also remark [ rem : out of point ] the proof of theorem [ th : finite period ] is removed to section [ sec : proof ] .recall that the probability space is given by with the filtration generated only by is completed with null sets . 
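The stochastic-orbit definition suggests a direct way to measure periods numerically: track the unwrapped winding angle around the equilibrium and record the first time it decreases by 2*pi (one clockwise loop). The sketch below does this for the deterministic system, repeating the illustrative model choices (which are assumptions, not the paper's), and the result can be compared with the linearized period 2*pi/omega, omega^2 = x_eq * y_eq * phi'(y_eq) * |kappa'(x_eq)|.

```python
import numpy as np

# Winding-angle measurement of the orbit period around the interior
# equilibrium.  Model functions and parameters are illustrative.
ALPHA, GAMMA, NU = 0.02, 0.03, 3.0
phi = lambda y: -0.04 + 0.10 * y
kappa = lambda x: (1.0 - x) / NU
XEQ, YEQ = 1.0 - NU * GAMMA, 0.6          # kappa(XEQ)=GAMMA, phi(YEQ)=ALPHA

def rhs(z):
    x, y = z
    return np.array([x * (phi(y) - ALPHA), y * (kappa(x) - GAMMA)])

def rk4_step(z, dt):
    k1 = rhs(z); k2 = rhs(z + 0.5 * dt * k1)
    k3 = rhs(z + 0.5 * dt * k2); k4 = rhs(z + dt * k3)
    return z + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def orbit_period(z0, dt=0.01, max_steps=200000):
    z, total = np.array(z0, float), 0.0
    theta = np.arctan2(z[1] - YEQ, z[0] - XEQ)
    for k in range(1, max_steps + 1):
        z = rk4_step(z, dt)
        new = np.arctan2(z[1] - YEQ, z[0] - XEQ)
        d = new - theta
        d -= 2 * np.pi * round(d / (2 * np.pi))   # unwrap the increment
        total += d
        theta = new
        if total <= -2 * np.pi:                   # one full clockwise loop
            return k * dt
    return None
```

For a small orbit the measured period is close to the linearized value, while larger orbits deviate from it, mirroring the bijection between the entropy level and the period stated above.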
our proof ,although unwieldy , allows us to describe precisely the possible trajectories of solutions of .it consists in defining subregions of the domain , see definition [ def : regions ] below illustrated by fig .[ fig : rotation ] , and prove that the process exits from them in finite time by the appropriate frontier . according to theorem [ prop : existence ] any regular solution of is a markov process .we then repeatedly change the initial condition of the system , as equivalent of a time translation and use definition [ def : exit times ] herefater .we obtain recurrence properties via theorem 3.9 in .since it is repeatedly used hereafter , we provide here a version suited to our context .[ th : reccurent domain ] let be a regular solution of in , starting at , for some .let verifying for all and where and . then leaves the region in finite time almost surely .[ def : theta ] let be defined by as a concave decreasing function . for a solution to , we define the finite variation process verifying .additionally , let .[ def : regions ] we define eight sets such that and , by [ def : exit times ] let be a solution to starting at . for any , we define the stopping times . by .since and , the graph illustrates the general case.,height=264 ] [ rem : out of point ] it seems rather clear that the point is not reached in finite time with a positive probability . in the following , the fact that for some small in a neighborhood of implies that is not a limit to almost every path of a solution to , recall remark [ rem : basic properties ] . to ease the reading of the proof of theorem [ th : finite period ] which follows from the following propositions [ prop : r1_r8 ] to [ prop : r8_r1 ] , we divide it in four quadrants around .we first prove that the process cycles , even in infinite time , for some particular starting points .[ prop : r1_r8 ] if , then . 
if , then .this is a direct consequence of the absence of brownian motion in .take .then on ] .then for any such that .moreover , theorem [ th : reccurent domain ] stipulates that leaves in finite time almost surely which is only possible via according to proposition [ prop : r1_r8 ] . reaching the boundaryis prevented by theorem [ prop : existence ] .[ prop : leave r_2 and r_3 ] if then .we follow the proof of proposition [ prop : r1_r2 ] with . [prop : r_1 to r_3 ] if then ._ let be a sequence of stopping times defined by and by construction if for some , then for all .following propositions [ prop : r1_r8 ] , [ prop : r1_r2 ] and [ prop : leave r_2 and r_3 ] , for all almost surely , and .we prove in step 2 that this implies providing that holds we immediately get ._ step 2 . _if , then for all , and according to proposition [ prop : leave r_2 and r_3 ] , does not converge to the set .since is a positive decreasing process for , doob s martingale convergence theorem implies that converges pathwise in .assume now that does not converge to with on .then for any , and for almost every if the integral explodes to for some on some non null subset , then for almost every , and for almost every , implying that converges to on , a contradiction with , so that holds on .we then consider the random time , being the first time such that [ eq : limite_borne ] _0^t _ , n ds c_- , and the smallest index such that .note that is not a -stopping time and is not -adapted since they depend on which is -measurable .implies that there exists a random time such that , otherwise we would have a contradiction of on a subset of : this implies that for almost every , and being a continuous process this is impossible for small enough since is strictly decreasing and thus is a null set . 
holds .we show that starting from , reaches in finite time almost surely .[ prop : r2_to_r4 ] if then ._ we consider with , and aim to prove that the process with is a supermartingale on , for .notice that it is bounded in .applying it to first gives \ ; .\ ] ] then , noticing that , we obtain it is clear that for all .now notice that for , we have so that \\ & = ( x-\we x)^2 x \we x\l[\frac{y}{x}-\frac{\we y}{\we x}\r ] < 0 \ ; .\end{aligned}\ ] ] now on , implies that , so that \\ & = ( x-\we x)^2x\l(y-\we y\r)<0 \;.\end{aligned}\ ] ] denoting , we conclude that is a supermartingale for . using optional sampling theorem , assisted by proposition [ prop : leave r_2 and r_3 ], almost surely and since then _ step 2 ._ according to proposition [ prop : leave r_2 and r_3 ] , almost surely for any , and according to proposition [ prop : r_1 to r_3 ] , for all .taking , we define the sequence with and we then have for any .the sequence is decreasing in the sense of inclusion , so that using baye s rule , using step 1 of the present proof and the markov property of , plugging this inequality into concludes the proof .notice that by choosing properly in the above proof , it is possible to be arbitrarily close to in finite time .the device is used later in proposition [ prop : r7_r8 ] .[ prop : r4_to_r5 ] if then ._ we claim that almost surely .consider the function .the process is a positive supermartingale on : according to doob s martingale convergence theorem , converges point - wise with .let and define .then on , and similarly to proposition [ prop : leave r_2 and r_3 ] , we use theorem [ th : reccurent domain ] to assert that leaves in finite time almost surely .this being true for any , for almost every . in ,this is only possible if also , implying that on this set .this being improbable , almost surely ._ by denoting , we then define the sequence by if , then , according to proposition [ prop : r2_to_r4 ] , the process reaches back in finite time . 
using step 1, we have that . by construction and proposition [ prop :r2_to_r4 ] , for therefore , and the sequence of sets is decreasing in the sense of inclusion .altogether we get now using bayes formula and , we finally obtain for every putting and together , implies that _ step 3 .let .according to the process is a supermartingale on ] on .this implies that converges with to on . since ,this convergence is improbable .finally we prove that if , then the process reaches in finite time almost surely .one can notice that proofs are very similar to those of subsections [ sec : west ] and [ sec : south ] .[ prop : leave_r6_union_r7 ] if then .define the sequence of regions through where is sufficiently small to have .applying it to we find that for all +\frac{1}{4}\frac{x^2}{\wt x - x}\sigma^2(y ) \r ] \le -\frac{x^2 \sigma^2(y)}{8(\wt x - x)^{3/2}}\\ & & \le -\frac{k^2 \sigma^2(1-k / n)}{8\sqrt{n}(n\wt x - k)^{3/2 } } < 0 \end{aligned}\ ] ] while in .doob s supermartingale convergence theorem implies the existence of the pointwise limit almost surely , where we use the notation .in addition , theorem [ th : reccurent domain ] guarantees that every set is exited in finite time almost surely . consequently if , we have that either or , a contradiction in either way . [ prop : r7_r8 ] if then .the proof is identical to the one of proposition [ prop : r2_to_r4 ] , with small modifications . here with and .the process is a supermartingale on if we chose where are two positive constants given by \times [ \wt y , \we y ] } x \wt x \sigma^2(y ) ] the justification is the following .the domain contains the area of interest . 
using proposition [ prop : r2_to_r4 ], we can prove that is a supermartingale on \x [ \we y , \wt y] ] , - x(y - \we y)[\phi(y)-\alpha]\\ & + \sigma^2(y)\left[y(x-\we x ) - x ( y - \we y ) + \we x \left(y - x(1/c - \we x + \we x)\right)\right]\\ \le & \ y ( x - \we x)[\kappa(x)-\gamma ] - x(y - \we y)[\phi(y)-\alpha ] - x\we x \sigma^2(y)(1/c - \we \theta)\\ \le & \ m - m(1/c - \we \theta)\le 0 \;.\end{aligned}\ ] ] we then reproduce step 2 of the proof of proposition [ prop : r2_to_r4 ] , using propositions [ prop : r5_to_r7 ] and [ prop : leave_r6_union_r7 ] instead of propositions [ prop : r1_r2 ] and [ prop : leave r_2 and r_3 ] .[ prop : r8_r1 ] if then .we follow proposition [ prop : r4_to_r5 ] with the minor following modifications . * 1 * we consider the exit time of .the process with verifies for some .indeed and is null only if , whereas only if . applying theorem [ th : reccurent domain ] to , almost surely ._ step 2 . _ if , then the process reaches in finite time almost surely according to proposition [ prop : r7_r8 ] .we define the sequence with and proceeding as in step 2 proposition [ prop : r4_to_r5 ] , we obtain that implies that [ eq : limite proba ] __ define , which is strictly positive according to step 1 . consider the new process with .it is a positive submartingale on ] with .if we implicitly assume that the perturbation of the average growth rate of the productivity is due to the flow of workers coming in and out of the fraction employed at time , assumption [ ass : form of vol ] conveniently expresses that this perturbation decreases with the employment rate since higher employment implies lower perturbation on the constant average rate .other models can of course be considered .assumption [ ass : form of vol ] together with assumption [ ass : form of functions ] , and comparing with , satisfy assumption [ ass : growth conditions ] . 
indeed for all , and along with the sub - linearity of the log function , in line with assumptions [ ass : non - linear goodwin ] and [ ass : growth conditions ] the vertical asymptote at implies that for some . under assumption[ ass : growth conditions ] and following and , and .assumption [ ass : form of vol ] also implies that is the root of a quadratic polynomial the latter shall have a unique root in to satisfy assumption [ ass : unique_root ] .the following example of condition is sufficient .[ ass : unique root ] we assume and .we are now able to claim the existence of such that , where is defined in proposition [ prop : estimate ] .a direct application provides using , this estimate becomes and is an explicitly calculable constant .following the same procedure with , with the same and .now choosing for some , so that , proposition [ prop : estimate ] provides theorem [ th : finite period ] is a straightly observable phenomenon with simulations , see fig .[ fig : panel_goodwin_stochastic_examples ] . under the assumptions of this section ,the system has been simulated using xppaut with a fourth order runge - kutta scheme for the deterministic part , and an euler scheme for the brownian part .[ fig : panel_goodwin_stochastic_examples ] illustrates the effect of the volatility level on trajectories of the system , as for the economic quantity . 
[ figure : subsample paths of trajectories for different values of volatility , starting from the green star and stopping at the red star ; right column : evolution of output over time for the subsample path . ] apart from specific subregions of as or where corollary 3.2 in can provide an estimate for the expectation of the exit time , a bound for the expected period seems out of reach .numerical simulations have nevertheless always provided reasonable finite periods of stochastic orbits of .we thus expect that is finite for a wide range of values of .let us start with and reformulate of definition [ def : stochastic orbit ] as the time the process crosses the line for the second time .this is equivalent to take . resorting to numerical methods , we have simulated the system times for different starting points in and recorded the positions at the time when this line is crossed the second time , that is the positions after a full loop . fig . [ fig : goodwin_stochastic_expected_loop ] contains such an examination for an array of values of .the expected time to complete a full loop is also illustrated . as observed , there seems to be a stable attractive fixed point to for sufficiently large values of . if the starting point is picked too close to , the expected crossing value after one loop is further away from it . on the other hand , if one starts extremely far away from , say with , then the expected value after one loop is higher .this implies that after many loops , the expectation converges , and so does with the number of loops around . assuming that for enough initial points , theorem [ th : reccurent domain ] can be used with at points and to prove the following conjecture .[ prop : fixed_point ] consider the function such that is a solution to with , and is the finite stopping time defined by theorem [ th : finite period ].
then has at least one fixed point in .after one full loop ( left ) , and expected elapsed time ( right ) .computation performed in matlab , with simulations for every value single one of the initial values taken along the line .,height=415 ]this contribution attempts to draw the attention of dynamical system analysis onto macroeconomic models . before looking into complex models of finance and crises , e.g. , we focus here on a brownian perturbation added into a non - linear version of the lotka - volterra system used in economics , the goodwin model . to begin with, we recall the usual results for the deterministic planar oscillator : we provide the constant entropy function and describe the period of the closed orbits drawned by the system .we then provide sufficient conditions for the stochastically perturbed system to stay in the meaningful domain which is a a bounded subset of for the -component .the entropy function is actually of great use for the last result , additionally to prior estimates on variations of the system .we finally prove what seems a fundamental and staightforward property of the system , namely that a solution rotates with perturbations around a unique point .the definition of stochastic orbits provided here conventienly suits the intuition of how the deterministic concept can be extended .however it has clearly not the ambition to be a definitive concept and further investigations might confirm its usefulness or its precarity .the proof exploits the concept of reccurent domains in an intensive manner .we expect that economists seek interest in , as other perturbed macroeconomic systems ( e.g. ) , for the possibility to adjust the model to observed past data ( e.g. and ) and find a possible synthetic explanation for perturbations of business cycles ( see ) .both authors want to thank matheus grasselli for leading ideas and presentation suggestion .remaining errors are authors responsibility . 
| this paper examines the cycling behavior of a deterministic and a stochastic version of the economic interpretation of the lotka - volterra model , the goodwin model . we provide a characterization of orbits in the deterministic highly non - linear model . we then study the cycling behavior for a stochastic version , where a brownian noise is introduced via a heterogeneous productivity factor . sufficient conditions for existence of solutions to the system are provided . we prove that the system produces cycles around an equilibrium point in finite time for general volatility levels , using stochastic lyapunov techniques for recurrent domains . numerical insights are provided . * keywords * : lotka - volterra model ; goodwin model ; brownian motion ; random perturbation ; business cycles ; stochastic lyapunov techniques . |
calcium imaging , first used to measure the activity of neurons in the early 1990s , has been successfully applied throughout the nervous system .it allows us to see the behavior of neurons in awake behaving mice , using either chemical or genetic calcium indicators , with confocal microscopy , two - photon microscopy , or wearable imaging devices . as a result , it is an increasingly useful tool for identifying the neural substrate of mouse behaviors . however , calcium imaging videos have difficult noise properties , including white noise and motion artifacts which must be corrected in a preprocessing step before proper analysis can be undertaken .motion correction is the first step in the analysis of calcium images .after they are motion - corrected , rois are identified , and time - activity graphs are made from each roi . if the motion - correction is low - quality , then the time - activity graphs suffer , and the reconstructed networks may have errors . for real timeclosed loop operation , if the motion correction is slow , it can not be done while the mouse is in the microscope , and the experiment fails .turboreg is a commonly used algorithm to do motion correction .it uses a downsampling strategy , which is prerequisite for speed , and it uses a template image , which is necessary for accuracy .we have developed a similar method , called _ moco _ ( motion corrector ) , which adopted both strategies , since correcting one image against the next in the stack results in unacceptable roundoff errors .other approaches use hmms , or other techniques , , , , . is the only one similar mathematically to , and may be slightly faster than , moco , but it has accuracy problems ( see figure 2 ) .moco uses downsampling and a template image , and it can be called from imagej .however , it is faster than turboreg at translation - based motion correction because it uses dynamic programming and two - dimensional fft - acceleration of two - dimensional convolutions . 
also uses the fft approach but uses a different objective function that does not require dynamic programming ; we believe that our approach is more robust to corrupted data , see figure 2 .image stabilizer is as fast for small images , but is very slow for standard - size images .running on our own datasets , moco appears faster than all approaches compatible with imagej .moco corrects every image in the video by comparing every possible translation of it with the template image , and chooses the one which minimizes the norm of the difference between the images in the overlapping region , , divided by the area of . the fact that it isso thorough makes it robust to long translations in the data .more complicated non - translation image warps are usually unnecessary for fixing calcium images , which suffer from spurious translations , which moco corrects , and spurious motion in the z - direction , which is very hard to correct .our approach also uses cache - aware upsampling : when an image is aligned with the template in the downsampled space , it must be jittered when it is upsampled to see which jitter best aligns with the upsampled template .we do this in such a way that data that is used recently is reused immediately , making the implementation faster . hence , moco is an efficient motion correction of calcium images , and is likely to become a useful tool for processing calcium imaging movies .let , for and be an image in the stack .we assume is downsampled if it is larger than .let be the template image against which to align .we want to pick such that , where is input by the user , and is minimal , where is the set of ordered pairs of integers such that , , , and .if we do this for every image in the stack , we have then motion corrected the video , and we are done , up to a short upsampling step . 
to upsample , multiply the optimal by and do a local search to minimize on the finer grid .now , the first two sums can be computed via dynamic programming .let s consider when and are negative .let we have that hence , the first two sums can be computed for all in time , which is unaffected by a constant amount of downsampling .it suffices to compute for all such that , let be rotated degrees . using matlab notation , let ;{\rm zeros(w , n+w)}]),\ ] ];{\rm zeros(w , n+w)}]).\ ] ] commas denote horizontal concatenation , semicolons denote vertical concatenation , and is an matrix of zeros . for equally sized matrices , , let mean .then is a rearrangement of .since s are fast , that means can be computed for all in time .hence , after upsampling , the entire video can be aligned in time , where is the number of slides in the video .after are chosen to minimize , they are multiplied by two multiple times to upsample .every time they are multiplied by , are computed for to see which and are minimal .these nine evaluations of are done with a cache - aware algorithm for speed .we compare moco in speed to turboreg on its translation mode , using both the `` fast '' and `` accurate '' settings .we also compare it to image stabilizer using its default settings ( it can be made faster by changing the settings but the accuracy is poor ) .we use several real calcium imaging videos , which we say are if they contain slides of size .if the images are larger than , we downsample once , otherwise , we do not downsample .we have found that dowsampling and times causes severe errors so we avoid those settings .in addition , we have compared moco to turboreg on synthetic images with severe translational motion artifacts and have found that moco is slightly more accurate .all times are in seconds .the template used for every video is the first image in the video for both moco and turboreg .moco uses a maximum translation width of in both the and directions ..2 in [ 
the first author would like to thank julia sable for her help , and inbal ayzenshtat , jesse jackson , jae - eun miller , luis reid , weijian yang , and weiqun fang for their datasets . the first author would also like to thank the columbia university data science institute for their ongoing support . this research is supported by the nei ( dp1ey024503 , r01ey011787 ) , nimh ( r01mh101218 , r41mh100895 ) and darpa contract w91nf-14 - 1 - 0269 . this material is based upon work supported by , or in part by , the u. s. army research laboratory and the u. s. army research office under contract number w911nf-12 - 1 - 0594 ( muri ) . david s. greenberg , jason n.d . kerr , `` automated correction of fast motion artifacts for two - photon imaging of awake animals , '' _ journal of neuroscience methods _ , volume 176 , issue 1 , 15 january 2009 , pages 115 . patrick kaifosh , jeffrey d. zaremba , nathan b. danielson , and attila losonczy , `` sima : python software for analysis of dynamic fluorescence imaging data , '' front neuroinform . 2014 ; 8 : 80 . published online 2014 sep 23 | motion correction is the first in a pipeline of algorithms to analyze calcium imaging videos and extract biologically relevant information , for example the network structure of the neurons therein . fast motion correction would be especially critical for closed - loop activity triggered stimulation experiments , where accurate detection and targeting of specific cells is necessary . our algorithm uses a fourier - transform approach , and its efficiency derives from a combination of judicious downsampling and the accelerated computation of many norms using dynamic programming and two - dimensional , fft - accelerated convolutions . its accuracy is comparable to that of established community - used algorithms , and it is more stable to large translational motions . it is programmed in java and is compatible with imagej .
explicit strong stability preserving ( ssp ) runge kutta methods were developed for the time evolution of hyperbolic conservation laws , with discontinuous solutions .these works studied total variation diminishing ( tvd ) spatial discretizations that can handle discontinuities .the spatial discretizations used to approximate were carefully designed so that when the resulting system of odes ( where is a vector of approximations to , ) is evolved in time using the _ forward euler method _ , the solution at time satisfies a strong stability property of the form under a step size restriction the term can represent , as it did in the total variation semi - norm , or indeed any other semi - norm , norm , or convex functional , as determined by the design of the spatial discretization .these spatial discretizations satisfy the strong stability property _ when coupled with the forward euler time discretization _ , but in practice a higher order time integrator , that will still satisfy this property , is desired . to accomplish this, we attempt to re - write a higher order time discretization as a convex combination of forward euler steps , so that any convex functional property that is satisfied by the forward euler method will still be satisfied by the higher order time discretization .an -stage explicit runge kutta method can be written in the form , if all the coefficients and are non - negative , and a given is zero only if its corresponding is zero , then each stage can be rearranged into a convex combination of forward euler steps where if any of the s are equal to zero , the corresponding ratio is considered infinite .the last inequality above follows from the strong stability conditions and and the consistency condition . 
from this we can conclude that whenever the explicit runge kutta method can be decomposed into convex combinations of forward euler steps , then any convex functional property satisfied by forward euler will be _ preserved _ by the higher - order time discretizations , perhaps under a different time - step restriction . thus , this type of decomposition where is clearly a sufficient condition for strong stability preservation . it has also been shown that this convex combination condition is necessary for strong stability preservation . if a method does not have a convex combination decomposition into forward euler steps , we can find some ode with some initial condition such that the forward euler condition is satisfied but the method does not satisfy the strong stability condition for any positive time - step . methods that can be decomposed like this with are called strong stability preserving ( ssp ) , and the coefficient is known as the _ ssp coefficient _ of the method . ssp methods guarantee the strong stability of the numerical solution for any ode and any convex functional provided only that the forward euler condition is satisfied under a time step . this is a very strong requirement that leads to severe restrictions on the allowable order of ssp methods , and on the size of the allowable time step . we seek high order ssp runge kutta methods with the largest allowable time - step . the forward - euler time step is a property of the spatial discretization method only , and so our aim in searching for time - stepping methods that preserve the strong stability property is to maximize the _ ssp coefficient _ of the method . a more relevant quantity may be the total cost of the time evolution , which in our case translates into the allowable time step relative to the number of function evaluations at each time - step ( typically the number of stages of a method ) . for this purpose we define the _ effective ssp coefficient _ where is the number of stages . this value
allows us to compare the efficiency of explicit methods of a given order . it has been shown that all explicit -stage runge kutta methods have an ssp bound , and therefore , but this upper bound is not always attained . in it was shown that explicit ssp runge kutta methods can not exist for order . however , in the special case where we consider only linear autonomous problems , explicit ssp runge kutta methods exist for any _ linear _ order . the linear and nonlinear order conditions are equivalent up to and including order , so in this work we consider explicit ssp runge kutta methods that have nonlinear order and , and have higher linear orders . in section [ sec : background ] we review the ssp properties of explicit runge kutta methods and discuss the linear and nonlinear order conditions . using these order conditions and the optimization problem described in , in section [ sec : optimization ] we describe the optimization code in matlab ( based on ) used to find explicit runge kutta methods that have with optimal ssp coefficient . in section [ sec : optimal ] we list some of the new methods and their effective ssp coefficients , and in section [ sec : test ] we demonstrate the performance of these methods on a selection of test problems . strong stability preserving methods were first developed by shu for use with total variation diminishing spatial discretizations . in these works , the authors presented second and third order methods that have . the explicit ssp runge kutta method of order and the method these methods were proven optimal in . it was shown in that no four stage fourth order explicit runge kutta methods exist with positive ssp coefficient .
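before turning to the new methods , it is useful to see the convex - combination structure concretely . below is the classical shu - osher ssp runge kutta ( 3,3 ) method ( used later as a baseline in example 3 ) , written so that each stage is visibly a forward euler step or a convex combination of one with previous stages ; the scalar - ode python sketch and its function names are ours , not the paper's matlab code .

```python
def forward_euler(u, f, dt):
    """One forward Euler step u + dt*f(u) (scalar ODE for simplicity)."""
    return u + dt * f(u)

def ssprk33_step(u, f, dt):
    """Shu-Osher SSPRK(3,3): every stage is a convex combination of
    forward Euler steps, so any convex functional property satisfied
    by forward Euler under dt <= dt_FE is preserved under the same
    restriction (ssp coefficient C = 1)."""
    u1 = forward_euler(u, f, dt)                      # Euler stage
    u2 = 0.75 * u + 0.25 * forward_euler(u1, f, dt)   # convex combination
    return u / 3.0 + (2.0 / 3.0) * forward_euler(u2, f, dt)
```

applied to the linear test problem u' = u , one step reproduces the cubic taylor polynomial 1 + dt + dt^2/2 + dt^3/6 , confirming third order ; because every stage is a convex combination of forward euler steps with coefficient 1 , a convex functional such as the total variation that is non - increasing under forward euler remains non - increasing here under the same time - step restriction .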
by considering methods with , fourth order methods with order have been found . notable among these is the method with ( ) in and the method with ( ) in . it was shown that no methods of order with positive ssp coefficients can exist . these restrictive order barriers on explicit ssp runge kutta methods stem in part from the nonlinearity of the odes . for order of accuracy on linear autonomous ode systems , explicit ssp runge kutta methods need only satisfy a smaller set of order conditions . if we require only that the method have high linear order ( ) , then the order barrier is broken and explicit runge kutta methods with positive ssp coefficients exist for arbitrarily high linear orders . optimally contractive explicit runge kutta methods were studied by kraaijevanger in , where he gives optimal _ linear _ methods for many values of and , including , and for any . these methods are interesting because their ssp coefficients serve as upper bounds for nonlinear methods , but they may also be useful in their own right . although ssp methods were first developed for nonlinear problems , the strong stability preserving property can be useful for linear problems such as maxwell's equations and linear elasticity . first and second order methods that have stages have been shown to attain the theoretical bound . methods of order with and also exist with . these methods can be found in , and are given here in their canonical shu - osher form : the family of -stage , linear order methods has and : where the coefficients of the final stage of the -stage method are given iteratively by starting from the coefficients of the -stage , first order method and . the family of -stage , linear order methods has and : here the coefficients of the final stage of the -stage method are given iteratively by starting from the coefficient of the forward euler method .
however , all these methods with high linear order have low nonlinear order . the idea that we pursue in this paper is the construction of explicit ssp runge kutta methods that have a high linear order while retaining the highest possible nonlinear order . we also consider methods with and for comparison . the idea behind these methods is that they would be the best possible methods ( in terms of ssp coefficient and order ) for linear problems , without compromising order when applied to nonlinear problems . the shu - osher form of an explicit runge kutta method , given in , is most convenient for observing the ssp coefficient . however , this form is not unique , and not the most efficient form to use for the optimization procedure . the butcher form of the explicit method given by ( where the coefficients are placed into the matrix and into the row vector ) is unique , so rather than perform a search for the optimal convex combination of the shu - osher form , we define the optimization problem in terms of the butcher coefficients . the conversion from the shu - osher form to butcher form , and from an optimal butcher form to the canonical shu - osher form , is discussed in . we follow the approach developed by david ketcheson and successfully used in : we search for coefficients and that maximize the value subject to constraints : where are the order conditions . after this optimization we have the coefficients and and an optimal value that define the method . the order conditions appear as the equality constraints on the optimization problem . in this work , we consider methods that have and but have higher linear order . in this subsection , we list these order conditions .
* linear order conditions : * given a runge kutta method written in the butcher form with coefficients and ( and where is a vector of ones ) , the order conditions that guarantee order accuracy for a linear problem can be simply expressed as * nonlinear order conditions : * if we want a method to demonstrate the correct order of accuracy for nonlinear problems , the first and second order conditions are the same as above : a method that satisfies these conditions will be second order for both linear and nonlinear problems . two additional conditions are required for third order accuracy : note that when the first of these conditions is satisfied , the second condition is equivalent to , which is the linear third order condition . four more conditions are required for the method to be fourth order for a nonlinear problem . note that the first three conditions together imply the fourth order linear order condition . in this work we consider the nonlinear order conditions only up to because it is known that there are no explicit ssp runge kutta methods greater than fourth order , but we consider higher order linear order conditions . using david ketcheson's matlab optimization code with our modified order conditions ( described in section [ orderconditions ] ) we produce the optimal linear / nonlinear ( lnl ) methods in this section . this code formulates the optimization problem in section [ sec : optimization ] in matlab and uses fmincon to find the coefficients and that yield the largest possible . we set the tolerances on fmincon to . we used this code to generate methods with and . we compare these methods with `` linear '' methods that we generated and matched to known optimal methods . our primary interest is the size of the ssp coefficient for each method . we denote the ssp coefficient for a method with stages , linear order , and nonlinear order by . the ssp coefficients for the methods with a given number of stages and linear order are the same as for the
corresponding linear methods , ( i.e. ) . this indicates that , for these values of and , the additional condition needed for nonlinear third order does not pose additional constraints on the strong stability properties of the method . table [ tab : p2p3 ] shows the ssp coefficients of the and methods with . the coefficients for are known to be optimal because they match the linear threshold in kraaijevanger's paper . table [ tab : p4 ] shows the ssp coefficients of the methods for . in bold are the coefficients that match those of the methods . in general , as we increase the number of stages the ssp coefficients for the methods approach those of the methods , as shown in figure [ fig : p2p4a ] . the tables clearly show that the size of the ssp coefficient depends on the relationship between and , so it is illuminating to look at the methods along the diagonals of these tables . clearly , for methods we have an optimal value of and . the and methods all attain this optimal value , but for we have and .
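the linear and nonlinear order conditions discussed above are easy to check numerically for any tableau . the sketch below ( ours ) evaluates the eight nonlinear conditions through fourth order for an explicit method in butcher form ; the classical rk4 tableau used in the test is a standard textbook example , not one of the optimized lnl methods from the tables .

```python
def check_order_conditions(A, b):
    """Evaluate the nonlinear order conditions up to fourth order for an
    explicit Runge-Kutta method in Butcher form, with c_i = sum_j A[i][j].
    Returns each left-hand side; the labels give the target values."""
    s = len(b)
    c = [sum(A[i]) for i in range(s)]
    dot = lambda x, y: sum(xi * yi for xi, yi in zip(x, y))
    Ac = [dot(A[i], c) for i in range(s)]                       # (A c)_i
    Ac2 = [dot(A[i], [ci * ci for ci in c]) for i in range(s)]  # (A c^2)_i
    A2c = [dot(A[i], Ac) for i in range(s)]                     # (A^2 c)_i
    return {
        'b.e = 1':        dot(b, [1.0] * s),
        'b.c = 1/2':      dot(b, c),
        'b.c^2 = 1/3':    dot(b, [ci ** 2 for ci in c]),
        'b.Ac = 1/6':     dot(b, Ac),
        'b.c^3 = 1/4':    dot(b, [ci ** 3 for ci in c]),
        'b.(c*Ac) = 1/8': dot(b, [ci * a for ci, a in zip(c, Ac)]),
        'b.Ac^2 = 1/12':  dot(b, Ac2),
        'b.A^2c = 1/24':  dot(b, A2c),
    }
```

a tableau satisfying the first two entries is second order , the first four is third order , and all eight is fourth order for nonlinear problems ; the linear conditions are the subset built only from powers of the matrix , consistent with the remark that the first three fourth order conditions imply the fourth order linear condition .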
however , once we get to a high enough number of stages , all the methods with and that have and . figure [ fig : p2p4b ] shows that for the linear methods ( ) the ssp coefficient is fixed for ( blue dotted line ) , ( red dotted line ) , ( green dotted line ) , ( black dotted line ) , and ( cyan dotted line ) , and that the ssp coefficient of the corresponding methods ( solid lines ) approach these as the number of stages increases . it is interesting to note that the linear stability regions of the , methods are generally identical . the methods have stability regions that are virtually identical to those of the linear methods when the ssp coefficient is identical . in addition , methods with and all have the same stability regions as the corresponding linear methods , which is not surprising as the stability polynomial of is unique .
for the rest of the methods , we observe that for a given number of stages , as the linear order increases the linear stability regions of the methods look closer to those of the linear methods . a nice illustration of this is the family of methods , shown in figure [ fig : linstab2 ] . it is known in the literature that some methods with nonlinear orders and achieve the linear threshold value . a nice example of this is ketcheson's ssp runge kutta method of , , which achieves the threshold value . this suggests that the linear order conditions are very significant to the value of the ssp coefficient . indeed , we see this relationship in tables [ tab : p2p3 ] and [ tab : p4 ] : as we move right from column to column we see a significant drop in ssp coefficient . for each application , one must decide if a higher linear order is valuable , as we pay a price for requiring additional . however , once one has decided that a higher linear order is worth the cost , there is no penalty in terms of ssp coefficient for requiring a higher nonlinear order and , in most cases , little reason not to use . our results show that if one wishes to use a method with high linear order , then requiring or even rather than the standard is not usually associated with significant restriction on the ssp coefficient . this can be beneficial in cases where the solution has linear and nonlinear components that need to be accurately captured simultaneously , or in different regions , or at different time - levels , so that the use of an ssp method that has optimal nonlinear order and higher linear order would be best suited for all components of the solution .
( figure caption : for linear ( blue ) , ( red ) and ( green ) methods ; the methods approach the and methods as increases . ) in this section , the optimized lnl methods described in section [ sec : optimal ] are tested for convergence and ssp properties . first , we test these methods for convergence on both odes and pdes to confirm that the new methods exhibit the desired linear and nonlinear orders . next , we study the behavior of these methods in conjunction with higher order weno spatial discretizations , and show that although the weno method is nonlinear , when applied to a linear smooth problem the higher linear order is beneficial . on the other hand , for nonlinear problems , both with shocks and without , the higher nonlinear order is advantageous . finally , the lnl methods are tested on linear and nonlinear problems with spatial discretizations that are provably total variation diminishing ( tvd ) and positivity preserving . we show that for the linear case , the observed time - step for the time stepping method to preserve the tvd property is well - predicted by the theoretical ssp coefficient , while for positivity and for the nonlinear problem the theoretical time - step serves as a lower bound . * example 1 : nonlinear ode convergence study .
* the van der pol problem is a nonlinear system of odes : we use and initial conditions . this was run using ( 9,6,2 ) , ( 9,6,3 ) , ( 9,6,4 ) , ( 10,9,2 ) , ( 10,9,3 ) , and ( 10,9,4 ) lnl runge kutta methods to final time , with where . the exact solution ( for error calculation ) was calculated by matlab's ode45 routine with tolerances set to abstol= and reltol= . in figure [ fig : vdpconv ] we show the of the errors in the first component vs. the of the number of points . the slopes of these lines ( i.e. the orders ) are calculated by matlab's polyfit function . as expected , the rate of convergence follows the nonlinear order of the method . in fact , we observe that a higher linear order is of no benefit at all for this example . * example 2 : pde convergence study . * in this study we solve the linear advection equation with sine wave initial conditions and periodic boundaries : u(0,x) = \sin(4\pi x) , \quad u(t,0) = u(t,1) . the fourier spectral method was used to discretize in space using points . the exact solution to this problem is a sine wave with period that travels in time , so the fourier spectral method gives us an exact solution in space once we have two points per wavelength , allowing us to isolate the effect of the time discretization on the error . we run this problem for five methods with orders , . our final time is , and the time step , where . errors are computed at the final time by comparison to the exact solution . figure [ fig : spectral ] shows the of the norm of the errors vs. of the number of points . the slopes of these lines ( i.e. the orders ) are calculated by matlab's polyfit function , and demonstrate that the methods achieved the expected linear convergence rates . ( figure caption : in both cases , the linear order dominates . )
* example 3 : linear advection with weno * we repeat the example above , this time using the 15th order ( ) weno method to discretize in space with for . the weno method is a nonlinear method , so that even if the pde is linear , the resulting system of odes is not . however , we can decompose the weno method into a linear part and a nonlinear correction term that suppresses oscillations . in theory , when the equation is linear and the solution is smooth , the weno method is close to linear . we test this problem with selected lnl runge kutta time discretizations of linear order and , and with the shu - osher ssp runge kutta ( 3,3 ) and ketcheson's ssp runge kutta ( 10,4 ) . as above , our final time is , and the time step , where . errors are computed at the final time by comparison to the exact solution . figure [ fig : linweno ] shows the of the norm of the errors vs. of the number of points , and the slopes of these lines ( i.e. the orders ) as calculated by matlab's polyfit function . we observe that the linear order dominates for this problem , which indicates that in regions where the problem is primarily linear and the solution smooth , the new lnl methods with higher linear orders could be of benefit . * example 4 : burgers equation with weno . * in the previous example we demonstrated the advantages of using a time - stepping method with higher with weno in the case of a linear , smooth problem . in this example , we show how a higher nonlinear order is beneficial when dealing with a nonlinear equation with a possibly discontinuous solution . consider burgers equation with symmetric sine wave initial conditions and periodic boundaries .
u(0,x) = \sin(2\pi x) , \quad u(t,0) = u(t,1) . this problem develops a standing shock . we use a 15th order weno scheme with points in space , and test the lnl time - stepping methods of linear order and nonlinear order . we use a time - step where . in figure [ fig : burgersweno ] we show the absolute values of the pointwise errors at spatial location for ( top ) and for ( bottom ) . these errors are shown before the shock forms ( at time , solid line ) and after the shock forms ( at time , dotted line ) . observe that for smaller numbers of spatial points the errors decay very fast ; however , once we reach a spatial refinement that is small enough we see that the methods with higher have significantly smaller errors . if we consider only points , we see the nonlinear order generally dominating : the linear methods feature second order convergence both pre- and post - shock , while the methods are fourth order pre - shock , but jump to twelfth order post - shock ( probably capturing the high order weno behavior ) . taken together with the problem in example 3 , this suggests that using a method with high linear and high nonlinear order may be beneficial in examples that have smooth and linear regions and non - smooth nonlinear regions . ( figure caption : pointwise errors at , before ( solid lines ) and after ( dotted lines ) the shock . the top figures show the of the errors vs. the number of points ; the bottom plots show the of the errors vs. for . the time stepping methods used are and , with nonlinear orders ( red ) and ( blue ) . clearly , methods with higher nonlinear order in time give smaller errors . )
* example 5 : positivity and tvd time - step for a linear advection equation with first order finite difference in space . * consider the linear advection equation with a step function initial condition on the domain , with periodic boundary conditions . we take and initial condition . the problem is semi - discretized using a second order conservative scheme with a koren limiter as in , with , and run to . for this problem , euler's method is tvd for . we computed the numerical solution using all the ssp lnl runge kutta methods described in section [ sec : optimal ] and , as above , found the largest for which tvd and positivity are preserved . in figure [ fig : bltvd ] we plot these values ( blue for tvd , green for positivity ) compared to the time - step guaranteed by the theory ( in red ) .
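the tvd test of example 5 can be reproduced in miniature . for u_t + u_x = 0 with first order upwind differencing on a periodic grid , forward euler is tvd precisely when dt / dx <= 1 ; the python sketch below is our own minimal stand - in for the matlab experiments , and the function names are ours .

```python
def total_variation(u):
    """Discrete total variation semi-norm with periodic wrap-around."""
    return sum(abs(u[i] - u[i - 1]) for i in range(len(u)))

def upwind_euler_step(u, lam):
    """One forward Euler step of u_t + u_x = 0 with first order upwind
    differencing on a periodic grid; lam = dt/dx. Each new value is
    (1 - lam)*u[i] + lam*u[i-1], a convex combination iff lam <= 1,
    which is exactly the TVD restriction."""
    return [u[i] - lam * (u[i] - u[i - 1]) for i in range(len(u))]
```

running a step profile with lam = 0.8 leaves the total variation non - increasing , while a single step with lam = 1.2 already increases it , illustrating why the forward euler time - step restriction is the right normalization for the ssp coefficient .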
the observed tvd and positivity time - step are typically significantly larger than the theoretical value .as before , the positivity preserving time - step is larger than the tvd time - step .using the optimization procedure described in , we find ssp - optimized explicit runge kutta methods that have nonlinear order of and and a higher order of convergence on linear autonomous problems .the order barrier of for explicit ssp runge kutta methods indicates the critical importance of the nonlinear order on the ssp property .nevertheless , we find that the _ size _ of the ssp coefficient is typically more constrained by the linear order conditions . as the number of stages increases , the ssp coefficient becomes primarily a function of the relationship between the number of stages and the linear order of the method , and not the nonlinear order .this means that in many cases , we can obtain methods of nonlinear order and linear order that have the same ssp coefficient as methods with nonlinear order and linear order .we verified the linear and nonlinear orders of convergence of the new methods on a variety of test cases .we also showed the behavior of these new lnl time - stepping methods coupled with the weno method for both linear and nonlinear problems , which suggests that these lnl methods may be useful for problems that have regions that are dominated by linear , smooth solutions and other regions where the solution is discontinuous or dominated by nonlinear behavior .finally , we studied the total variation diminishing and positivity preserving properties of these lnl methods on linear and nonlinear problems , and showed that for the linear problems , the theoretical ssp time - step is a very accurate predictor of the observed behavior , while serving only as a lower bound in the nonlinear case .we conclude that where methods with high linear order are desirable , it is usually advantageous to pick those methods that also have higher nonlinear order ( ) . 
* acknowledgment . * the authors wish to thank prof . bram van leer for the motivation for studying this problem , and prof . david ketcheson for many helpful discussions . this publication is based on work supported by afosr grant fa-9550 - 12 - 1 - 0224 and kaust grant fic/2010/05 . | high order spatial discretizations with monotonicity properties are often desirable for the solution of hyperbolic pdes . these methods can advantageously be coupled with high order strong stability preserving time discretizations . the search for high order strong stability time - stepping methods with large allowable strong stability coefficient has been an active area of research over the last two decades . this research has shown that explicit ssp runge kutta methods exist only up to fourth order . however , if we restrict ourselves to solving only linear autonomous problems , the order conditions simplify and this order barrier is lifted : explicit ssp runge kutta methods of any _ linear order _ exist . these methods reduce to second order when applied to nonlinear problems . in the current work we aim to find explicit ssp runge kutta methods with large allowable time - step , that feature high linear order and simultaneously have the optimal fourth order nonlinear order . these methods have strong stability coefficients that approach those of the linear methods as the number of stages and the linear order is increased . this work shows that when a high linear order method is desired , it may still be worthwhile to use methods with higher nonlinear order .
as policymakers , academics and the public at large grow increasingly concerned about the cost of an aging society , we believe it is worthwhile to go back in time and examine the capital market instruments used to finance retirement in a period before social insurance , defined benefit ( db ) pensions , and annuity companies . indeed , in the latter part of the 17th century and for almost two centuries afterwards , one of the most popular `` retirement income '' investments in the developed world was not a stock , bond or a mutual fund ( although they were available ) . in fact , the method used by many individuals to generate income in the senior years of the lifecycle was a so - called tontine scheme sponsored by government . part annuity , part lottery and part hedge fund , the tontine , which recently celebrated its 360th birthday , offered a lifetime of income that increased as other members of the tontine pool died off and their money was distributed to survivors . the classical tontine investment pool is quite distinct from its public image as a lottery for centenarians in which the longest survivor wins all the money in a pool . in fact , the tontine is both more subtle and more elegant . for those readers who are not familiar with the historical tontine , here is a simple example . imagine a group of 1,000 soon - to - be retirees who band together and pool their funds , which earn 30,000 in interest yearly ; this interest is split amongst the 1,000 participants in the pool , for a 30,000 / 1,000 = 30 guaranteed dividend per member . as members of the pool die off , the 30,000 of yearly interest is split amongst those who still happen to be alive . for example , if one decade later only 800 original investors are alive , the survivors receive a 30,000 / 800 = 37.50 dividend each . of this , 7.50 is _ other people's money _ .
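the arithmetic of the example can be made explicit with a two - line sketch ( ours , purely illustrative ) :

```python
def tontine_dividend(pool_interest, survivors):
    """Each year the pool's interest is split equally among survivors;
    the guaranteed floor is the dividend when everyone is still alive."""
    return pool_interest / survivors

# the worked example from the text: 30,000 of yearly interest
for alive in (1000, 800, 100):
    d = tontine_dividend(30000, alive)
    # prints survivors, dividend, and the part that is other people's money
    print(alive, d, d - 30.0)
```

with 1,000 survivors the dividend is the guaranteed 30 ; with 800 it is 37.50 ( 7.50 of other people's money ) ; with 100 it is 300 .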
then , if two decades later only 100 survive , the annual cash flow to each survivor is 30,000 / 100 = 300 : the 30 guaranteed dividend plus 270 of other people's money , a 30% annual yield on the initial investment of 1,000 . the extra payments above and beyond the guaranteed dividend are funded by the capital of the deceased . contrast this with a life annuity : paying 1 to the insurer initially , and receiving in return an income stream of for life . the constraint on these annuities is that they are fairly priced , in other words that with a sufficiently large client base , the initial payments invested at the risk - free rate will fund the called - for payments in perpetuity ( later we discuss the implications of insurance loadings ) . this implies a constraint on the annuity payout function , namely that . though is the payout rate per survivor , note that the payout rate per initial dollar invested is . we will return to this later . letting denote the instantaneous utility of consumption ( a.k.a . the felicity function ) , a rational annuitant ( with lifetime ) having no bequest motive will choose a life annuity payout function which maximizes the discounted lifetime utility \int_0^\infty e^{-rt}\,{}_tp_x\,u(c(t))\,dt , subject to the constraint . by the euler - lagrange theorem , this implies the existence of a constant such that . in other words , is constant , so provided that the utility function is strictly concave , the optimal annuity payout function is also constant . that constant is now determined by the pricing constraint , showing the following : optimized life annuities have constant payout c(t) = c^* , where c^* = \left[ \int_0^\infty e^{-rt}\,{}_tp_x\,dt \right]^{-1} . this result can be traced back to yaari ( 1965 ) , who showed that the optimal ( retirement ) consumption profile is constant ( flat ) and that 100% of wealth is annuitized when there is no bequest motive . for more details and an alternate proof , see the excellent book by cannon and tonks ( 2008 ) and specifically the discussion on annuity demand theory in chapter 7 .
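the optimal payout rate is easy to evaluate numerically as the reciprocal of the discounted survival integral . in the sketch below ( ours ) , the exponential ( constant force of mortality ) survival curve used in the example is an illustrative assumption , chosen because it has the closed form c* = r + lambda .

```python
import math

def annuity_payout_rate(r, survival, horizon=120.0, steps=12000):
    """c* = 1 / integral_0^horizon of e^{-rt} * tpx dt, by the trapezoid
    rule. Truncating the upper limit at `horizon` years (a cap on
    permissible ages) slightly raises the payout rate."""
    dt = horizon / steps
    acc = 0.0
    for k in range(steps + 1):
        t = k * dt
        w = 0.5 if k in (0, steps) else 1.0   # trapezoid end weights
        acc += w * math.exp(-r * t) * survival(t) * dt
    return 1.0 / acc
```

for r = 3% and a constant mortality hazard of 5% per year , the discounted survival integral is 1 / ( r + lambda ) , so the fair constant payout is about 8 cents per year per dollar annuitized ; any survival curve ( e.g. gompertz ) can be substituted for the `survival` argument .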
In practice, insurance companies funding the life annuity are exposed to systematic longevity risk (due to randomness or uncertainty in aggregate mortality), model risk (the risk that ${}_tp_x$ is mis-specified), as well as re-investment or interest rate risk (which is the uncertainty in $r$ over long horizons). The latter is not our focus here, so we will continue to assume that $r$ is a given constant for most of what follows. Note that even if re-investment rates were known with certainty, the insurance company would likely pay out less than the $c_0$ implied by the equation above, as a result of required capital and reserves, effectively lowering the lifetime utility of the (annuity and the) retiree. This brings us to the tontine structures we will consider as an alternative, in which a predetermined dollar amount is shared among the survivors at every $t$. Let $d(t)$ be the rate at which funds are paid out per initial dollar invested, a.k.a. the tontine payout function. Our main point in this paper is that there is no reason for the tontine payout function $d(t)$ to be a constant fixed percentage of the initial dollar invested (e.g.
4% or 7%), as it was historically. In fact, we can pose the same question as considered above for annuities: what $d(t)$ is optimal for subscribers, subject to the constraint that sponsors of the tontine cannot sustain a loss? Note that the natural comparison is now between $d(t)$ and $c_0\,{}_tp_x$, where $c_0$ is the optimal annuity payout found above.

Suppose there are initially $n$ subscribers to the tontine scheme, each depositing a dollar with the tontine sponsor. Let $N(t)$ be the random number of live subscribers at time $t$. Consider one of these subscribers, with lifetime $\zeta$. Given that this individual is alive, $N(t)-1$ is distributed as $\mathrm{Bin}(n-1,{}_tp_x)$; in other words, the number of other (live) subscribers at any time $t$ is binomially distributed with probability parameter ${}_tp_x$. So, as we found for the life annuity, this individual's discounted lifetime utility is \[\begin{aligned} E\Big[\int_0^{\zeta}&e^{-rt}u\big(\frac{n\,d(t)}{N(t)}\big)\,dt\Big]=\int_0^\infty e^{-rt}{}_tp_x \,E\big[u\big(\frac{n\,d(t)}{N(t)}\big)\mid\zeta > t\big]\,dt\\ & \qquad=\int_0^\infty e^{-rt}{}_tp_x\sum_{k=0}^{n-1} \binom{n-1}{k}{}_tp_x^k(1-{}_tp_x)^{n-1-k}u\big(\frac{n\,d(t)}{k+1}\big)\,dt.\end{aligned}\]

The constraint on the tontine payout function is that the initial deposit of $n$ dollars should be sufficient to sustain the withdrawals in perpetuity. Of course, at some point all subscribers will have died, so in fact the tontine sponsor will eventually be able to cease making payments, leaving a small remainder or windfall. But this time is not predetermined, so we treat that profit as an unavoidable feature of the tontine. Remember that we do not want to expose the sponsor to any longevity risk; it is the pool that bears this risk entirely. Our budget or pricing constraint is therefore that \[\int_0^\infty e^{-rt}\,d(t)\,dt=1.\] So, for example, if $d(t)$ is forced to be constant (the historical structure, which we call a _flat tontine_), then the tontine payout function (rate) is simply $d(t)=r$ (or somewhat more, if a cap on permissible ages is imposed, replacing the upper bound of integration in the constraint by a value less than infinity). We are instead searching for the _optimal_ $d(t)$, which we will find is far from constant.
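The inner binomial expectation in the lifetime-utility expression above can be evaluated directly. The sketch below is a minimal implementation, assuming CRRA felicity; the parameter values in the sanity checks are illustrative, not taken from the text.

```python
from math import comb, log

def crra(c, gamma):
    """CRRA felicity u(c); logarithmic utility when gamma == 1."""
    return log(c) if gamma == 1 else c**(1 - gamma) / (1 - gamma)

def expected_tontine_utility(n, p, d, gamma):
    """E[u(n*d / N(t)) | subscriber alive], where N(t) - 1 ~ Bin(n-1, p).

    This is the inner expectation of the integrand:
        sum_k C(n-1, k) p^k (1-p)^(n-1-k) u(n*d / (k+1)).
    """
    return sum(comb(n - 1, k) * p**k * (1 - p)**(n - 1 - k)
               * crra(n * d / (k + 1), gamma)
               for k in range(n))

# Sanity checks: if everyone survives for sure (p = 1), each of the n
# subscribers simply receives d, and a "pool" of one is the same thing.
assert abs(expected_tontine_utility(100, 1.0, 0.04, 2.0) - crra(0.04, 2.0)) < 1e-12
assert abs(expected_tontine_utility(1, 0.3, 0.04, 2.0) - crra(0.04, 2.0)) < 1e-12
```

Note that a lower survival probability raises the conditional expected utility, since a fixed total payout is then shared among fewer survivors.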
By the Euler-Lagrange theorem from the calculus of variations, there is a constant $\lambda$ such that the optimal $d(t)$ satisfies \[{}_tp_x\sum_{k=0}^{n-1}\binom{n-1}{k}{}_tp_x^k(1-{}_tp_x)^{n-1-k}\,u'\big(\frac{n\,d(t)}{k+1}\big)\frac{n}{k+1}=\lambda\] for every $t$. Note that this expression directly links individual utility to the optimal participating annuity. Recall that a tontine is an extreme case of participation, or pooling, of all longevity risk. The equation dictates exactly how a risk-averse retiree will trade off consumption against longevity risk; in other words, we are not advocating an _ad hoc_ actuarial process for smoothing realized mortality experience. Note that the actual mortality hazard rate does not appear in the above equation: it appears only implicitly, in both ${}_tp_x$ and $\lambda$ (which is determined by the budget constraint). Therefore, we will simplify our notation by re-parametrizing in terms of the survival probability $p$: let $D(p)$ satisfy \[p\sum_{k=0}^{n-1}\binom{n-1}{k}p^k(1-p)^{n-1-k}\,u'\big(\frac{n\,D(p)}{k+1}\big)\frac{n}{k+1}=\lambda.\] Substituting $p={}_tp_x$ into the above equation, it collapses to $d(t)=D({}_tp_x)$. The optimal tontine structure is $d(t)=D({}_tp_x)$, where $\lambda$ is chosen so the budget constraint holds. It is feasible (but complicated) to solve this once a generic $u$ is given, but in the case of constant relative risk aversion (CRRA) utility it simplifies greatly. Let $u(c)=\frac{c^{1-\gamma}}{1-\gamma}$ if $\gamma\neq 1$, and when $\gamma=1$ take instead $u(c)=\log c$. Define \[\beta_{n,\gamma}(p)=\sum_{k=0}^{n-1} \binom{n-1}{k}p^{k}(1-p)^{n-1-k}\big(\frac{n}{k+1}\big)^{1-\gamma}\] where $0<p\le 1$. Then: [crraoptimum] With CRRA utility, the optimal tontine has withdrawal rate $d(t)=d(0)\big({}_tp_x\,\beta_{n,\gamma}({}_tp_x)\big)^{1/\gamma}$, where \[d(0)=\Big[\int_0^\infty e^{-rt}\big({}_tp_x\,\beta_{n,\gamma}({}_tp_x)\big)^{1/\gamma}\,dt\Big]^{-1}. \label{d(1)formula}\]

Suppose $\gamma\neq 1$. Then the equation for $D(p)$ becomes $D(p)=\big(p\,\beta_{n,\gamma}(p)/\lambda\big)^{1/\gamma}$, so that the constraint now implies the formula for $d(0)$ above. A similar argument applies when $\gamma=1$. It is worth emphasizing that the shape $d(t)/d(0)$ does not depend on the particular form of the mortality hazard rate, or on the interest rate, but only on the longevity risk aversion $\gamma$ and the number of initial subscribers to the tontine pool, $n$. In other words, the mortality hazard rate and $r$ enter into the expression for $d(t)$ only via the constant $d(0)$. We will prove the following in section [proof1] of the appendix: for any and [betabound] 1. increases with ; 2. and ; 3. , , ; 4.
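Corollary [crraoptimum] is straightforward to evaluate numerically. The sketch below computes $\beta_{n,\gamma}(p)$ and the optimal payout curve under an assumed Gompertz survival law (age 65, $m=88.72$, $b=10$) and $r=4\%$; these parameter choices are illustrative, not prescribed by the text.

```python
from math import comb, exp

def beta(n, gamma, p):
    """beta_{n,gamma}(p) = E[(n/(K+1))^(1-gamma)], K ~ Bin(n-1, p)."""
    return sum(comb(n - 1, k) * p**k * (1 - p)**(n - 1 - k)
               * (n / (k + 1))**(1 - gamma) for k in range(n))

def gompertz_tpx(t, x=65.0, m=88.72, b=10.0):
    """Survival probability from age x to age x + t under Gompertz mortality."""
    return exp(exp((x - m) / b) * (1 - exp(t / b)))

def optimal_tontine_payout(gamma, n, r=0.04, horizon=60.0, dt=0.01):
    """Return d(0) and the curve d(t) = d(0) * (tpx * beta)^(1/gamma),
    with d(0) fixed by the budget constraint
        integral_0^inf e^(-rt) d(t) dt = 1   (Riemann sum on [0, horizon])."""
    ts = [i * dt for i in range(int(horizon / dt) + 1)]
    shape = [(gompertz_tpx(t) * beta(n, gamma, gompertz_tpx(t)))**(1 / gamma)
             for t in ts]
    integral = sum(exp(-r * t) * s * dt for t, s in zip(ts, shape))
    d0 = 1.0 / integral
    return d0, lambda t: d0 * (gompertz_tpx(t)
                               * beta(n, gamma, gompertz_tpx(t)))**(1 / gamma)

# Under log utility (gamma = 1), beta is identically 1, so the optimum
# collapses to the "natural" tontine d(t) = d(0) * tpx.
assert abs(beta(250, 1.0, 0.7) - 1.0) < 1e-9
```

For example, `optimal_tontine_payout(2.0, 25)` returns the initial withdrawal rate $d(0)$ together with a callable payout curve for a pool of 25 with $\gamma=2$.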
and ; 5. for we have that . One immediate consequence of corollary [crraoptimum] and (a) of lemma [betabound] is that the optimal tontine payout rate decreases with . By (d) we have that for , so by dominated convergence, as . Therefore the (historical) flat tontine structure is optimal in the limit as $\gamma\to\infty$. Likewise for . Renormalizing, this implies that $d(t)$ concentrates increasingly near $t=0$. In other words, as we approach risk-neutrality, the optimal tontine comes closer and closer to exhausting itself immediately following purchase.

We define a _natural tontine_ to have payout $d(t)=d(0)\,{}_tp_x$, i.e. $d(t)$ is proportional to ${}_tp_x$, just as is the case for the annuity payment per initial dollar invested. Comparing the budget constraints of the annuity and the tontine, we get that $d(0)=c_0$, so the natural tontine payout rate agrees with that of the annuity (which justifies our singling it out). By corollary [crraoptimum] and (c) of lemma [betabound], we see that the natural tontine is optimal for logarithmic utility. We will see in section [numerics] that the natural tontine is close to optimal when $n$ is large. We therefore propose the natural tontine as a reasonable structure for designing tontine products in practice, rather than expecting insurers to offer a range of products with differing $\gamma$s. Figure [fig5] shows the ratio of the optimal tontine payout to the natural tontine payout, with a Gompertz hazard rate, parametrized by the longevity risk aversion $\gamma$. This exponentially-rising hazard rate is the basic mortality assumption in much of the actuarial literature. The figure provides numerical evidence that higher risk aversion implies a preference for reducing consumption at early ages in favour of reserving funds to consume at advanced ages.
To make this statement precise, define \[F_\gamma(t)=\int_0^t e^{-rs}\,d(s)\,ds,\] namely the present value of payouts from the optimal tontine through time $t$, per initial dollar invested (or equivalently, the proportion of initial capital used to fund payouts till time $t$). We conjecture that $F_\gamma(t)$ decreases as $\gamma$ increases, in any realistic situation (specifically, whenever the mortality distribution has an increasing hazard rate). We have not yet succeeded in proving this conjecture, but instead will derive a number of partial results that support it. In fact, numerical evidence suggests that the monotonicity holds regardless of the mortality distribution, provided $n$ is only of moderate size. It is in fact possible to construct pathological mortality distributions for which it fails when $n$ and $\gamma$ are large; specifically, this may actually happen during a low-hazard-rate lull between two periods with high hazard rates. We discuss this at greater length in the appendix, where we will also prove the following instances of the above conjecture. [increasinggamma] The monotonicity holds in the following situations: 1. for and arbitrary ; 2. for any fixed , , and , in the limit as ; 3. for some initial period of time , where depends on and . We will also prove a modified version, for fixed and , in the limit as (see theorem [asymptotictheorem]). In section [numerics] we will provide a variety of numerical examples that illustrate the optimal tontine payout function as a function of longevity risk aversion and the initial size of the tontine pool. We now compare the annuity and the tontine. [tontineannuitycagematch] Let $U_T$ denote the utility of the optimal tontine.
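The quantity just defined is easy to tabulate. As an illustration, the sketch below computes the cumulative present value of payouts for the _natural_ tontine $d(t)=d(0)\,{}_tp_x$ under an assumed Gompertz mortality (age 65, $m=88.72$, $b=10$) and $r=4\%$; by the budget constraint this proportion climbs toward 1.

```python
from math import exp

def tpx(t, x=65.0, m=88.72, b=10.0):
    """Gompertz survival probability from age x to age x + t."""
    return exp(exp((x - m) / b) * (1 - exp(t / b)))

def natural_tontine_rate(r=0.04, horizon=80.0, dt=0.01):
    """d(0) for the natural tontine d(t) = d(0) * tpx, from the
    budget constraint  integral_0^inf e^(-rt) d(t) dt = 1."""
    a = sum(exp(-r * i * dt) * tpx(i * dt) * dt
            for i in range(int(horizon / dt)))
    return 1.0 / a

def pv_of_payouts(t_max, r=0.04, dt=0.01):
    """F(t_max): proportion of the initial deposit funding payouts to t_max."""
    d0 = natural_tontine_rate(r)
    return sum(exp(-r * i * dt) * d0 * tpx(i * dt) * dt
               for i in range(int(t_max / dt)))

# F is increasing in t and exhausts the deposit in the limit.
assert pv_of_payouts(10) < pv_of_payouts(30) < pv_of_payouts(80) <= 1.0 + 1e-9
```

The same bookkeeping applied to the optimal payout curves for two different values of $\gamma$ provides a numerical check of the monotonicity conjecture.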
To compute this, suppose $\gamma\neq 1$, and observe that $\int_0^\infty e^{-rt}\big({}_tp_x\,\beta_{n,\gamma}({}_tp_x)\big)^{1/\gamma}\,dt=1/d(0)$ by the budget constraint. The utility of the optimal tontine is therefore precisely \[U_T=\frac{d(0)^{-\gamma}}{1-\gamma}\] by the corollary and the constraint. Consider instead the utility provided by the annuity, namely \[U_A=\frac{c_0^{-\gamma}}{1-\gamma}.\] [utilityinequality] $U_A\ge U_T$ for any $\gamma$ and $n$. For $\gamma\neq 1$ this follows from (e) of lemma [betabound] and the calculations given above; we show the case $\gamma=1$ in the appendix. Of course, the conclusion is economically plausible, as the tontine retains risk that the annuity doesn't.

The insurer offering an annuity will be modelled as setting aside some fraction $\delta$ of the initial deposits to fund the annuity's costs. In other words, a fraction $\delta$ of the initial deposits is deducted initially, to fund risk management, capital reserves, etc. The balance, $1-\delta$, invested at the risk-free rate, will fund the annuity. Therefore, with loading $\delta$, the budget constraint becomes \[\int_0^\infty e^{-rt}\,{}_tp_x\,c(t)\,dt=1-\delta,\] which implies that $c(t)=(1-\delta)c_0$ is the optimal payout structure for the annuity. The utility of the loaded annuity is therefore $\frac{(1-\delta)^{1-\gamma}c_0^{-\gamma}}{1-\gamma}$ for $\gamma\neq 1$, and $\frac{\log((1-\delta)c_0)}{c_0}$ for $\gamma=1$. In section [numerics] we will consider numerically the _indifference loading_ $\delta^*$ that, when applied to the annuity, makes an individual indifferent between the annuity and a tontine, i.e. the $\delta$ at which the two utilities coincide. It turns out that the loading decreases with $n$, in such a way that the _total_ loading $n\delta^*$ stays roughly stable. In other words, there is at most a roughly fixed amount that the insurer can deduct from the _aggregate_ annuity pool, regardless of the number of participants, before individuals start to derive greater utility from the tontine. We will illustrate this observation, at least for $\gamma=1$, by proving the following inequality in the appendix: [loadinginequality] Suppose that $\gamma=1$.
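The indifference loading can be found numerically by a bisection search: compute the lifetime utility of the optimal tontine, then find the loading at which the loaded annuity's utility drops to that level. Everything below is an illustrative sketch: the Gompertz parameters (age 65, $m=88.72$, $b=10$), $r=4\%$, the time grid, and the bracketing interval are assumptions, not values fixed by the text.

```python
from math import comb, exp, log

def tpx(t, x=65.0, m=88.72, b=10.0):
    """Gompertz survival probability from age x to age x + t."""
    return exp(exp((x - m) / b) * (1 - exp(t / b)))

def u(c, g):
    """CRRA felicity; logarithmic when g == 1."""
    return log(c) if g == 1 else c**(1 - g) / (1 - g)

def beta(n, g, p):
    """beta_{n,g}(p) = E[(n/(K+1))^(1-g)], K ~ Bin(n-1, p)."""
    return sum(comb(n - 1, k) * p**k * (1 - p)**(n - 1 - k)
               * (n / (k + 1))**(1 - g) for k in range(n))

R, DT, T = 0.04, 0.05, 60.0
TS = [i * DT for i in range(int(T / DT) + 1)]

def annuity_rate():
    """c0 = 1 / integral_0^inf e^(-rt) tpx dt (Riemann sum)."""
    return 1.0 / sum(exp(-R * t) * tpx(t) * DT for t in TS)

def tontine_utility(n, g):
    """Discounted lifetime utility of the optimal tontine."""
    shape = [(tpx(t) * beta(n, g, tpx(t)))**(1 / g) for t in TS]
    d0 = 1.0 / sum(exp(-R * t) * s * DT for t, s in zip(TS, shape))
    total = 0.0
    for t, s in zip(TS, shape):
        p, d = tpx(t), d0 * s
        eu = sum(comb(n - 1, k) * p**k * (1 - p)**(n - 1 - k)
                 * u(n * d / (k + 1), g) for k in range(n))
        total += exp(-R * t) * p * eu * DT
    return total

def annuity_utility(g, loading=0.0):
    """Discounted lifetime utility of the annuity paying (1-loading)*c0."""
    c = (1 - loading) * annuity_rate()
    return sum(exp(-R * t) * tpx(t) * u(c, g) * DT for t in TS)

def indifference_loading(n, g):
    """Bisect for the loading equating annuity and tontine utility."""
    ut, lo, hi = tontine_utility(n, g), 0.0, 0.5
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if annuity_utility(g, mid) > ut:
            lo = mid          # annuity still preferred: raise the loading
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

A larger pool brings the tontine's utility closer to the annuity's, so the loading returned by `indifference_loading` shrinks as `n` grows, which is the pattern described above.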
Then . Note that , since . There are several issues we do not address here which we openly acknowledge and leave for future research. In particular, there is the role of _credit risk_ as well as the impact of _stochastic mortality_, which will change the dynamics between tontines and annuities. The existence of credit risk is a much greater concern for the buyers of life annuities than of tontines, given the risk assumed by the insurance sponsor. Likewise, under a stochastic mortality model the tontine payout would be more variable and uncertain, which might reduce the utility of the tontine relative to the life annuity. On the other hand, the capital charges added to the market price of the life annuity would be higher in a stochastic mortality model. We appreciate that these are unanswered (or unaddressed) questions, and leave the examination of the robustness of the _natural tontine_ payout in a full stochastic mortality environment to a subsequent paper. We note that there is also the question of asymmetric mortality, in which the individual believes that his or her (subjective) hazard rate is lower than that of the typical tontine investor (for whom the tontine payout function is optimized). Indeed, this is potentially the explanation for the significant fraction of investors in King William's tontine who did not exercise their option to convert to an annuity: they (might have) thought their nominee was _much_ healthier than everyone else, and the only way to capitalize on that was to invest in the tontine. This leads to the idea of the _mortality loading_, or level of asymmetric mortality, that would induce an individual to prefer the tontine to an annuity. This, obviously, ties into the issue of stochastic mortality and is being addressed in a follow-up paper by Milevsky and Salisbury (2015).
In the next section, we focus on the numerical implications of our current model.

Figure [fig1] displays the range of possible 4% flat tontine dividends over time, assuming an initial pool of nominees, under a Gompertz law of mortality with parameters $(m,b)$. This mortality basis corresponds to a survival probability of ${}_{35}p_{65}$, i.e. from age 65 to age 100, and is the baseline for several of our numerical examples. The figure clearly shows an increasing payment stream conditional on survival, which isn't the optimal payout function.

* Figure [fig1] goes here *

Indeed, in such a (traditional, historical) tontine scheme, the initial expected payment is quite low relative to what a life annuity might offer in the early years of retirement, while the payment in the final years, for those fortunate enough to survive, would be both very high and quite variable. It is not surprising then that this form of tontine is both suboptimal from an economic utility point of view and not very appealing to individuals (with reasonable risk aversions) who want to maximize their standard of living over their entire retirement lifespan.

* Figure [fig2] goes here *

In contrast to figure [fig1], which displays the sub-optimal flat tontine, figure [fig2] displays the range of outcomes from the optimal tontine payout function, under the same interest rate and mortality basis.
To be very specific, figure [fig2] is computed by solving for the value of $d(0)$ and then constructing the optimal payout $d(t)$ over the full age range. Once the payout function is known for all $t$, the number of survivors at the 10th and 90th percentiles of the binomial distribution is used to bracket the range of the payout from age 65 to age 100. Clearly, the expected payout per survivor is relatively constant over the retirement years, which is much more appealing intuitively. Moreover, the discounted expected utility from a tontine payout such as the one displayed in figure [fig2] is much higher than the utility of the one displayed in figure [fig1].

* Table [table06] goes here *

Table [table06] displays the optimal tontine payout function for a very small pool. These values correspond to the values derived in section [theory]. Notice how the optimal tontine payout function is quite similar (identical in the first significant digit) regardless of the individual's longevity risk aversion (LoRA) $\gamma$, even when the tontine pool is relatively small. The minimum guaranteed dividend starts off at about 7% at age 65 and then declines to approximately 1% at age 95. Of course, the actual cash flow payout to an individual, conditional on being alive, does not necessarily decline, and in reality stays relatively constant.

* Table [table07] goes here *

Table [table07] displays utility indifference loadings for a participant at age 60. Notice how even a retiree with a very high level of longevity risk aversion (LoRA) will select a tontine instead of a life annuity if the insurance loading is greater than 7.5%; for less extreme levels of risk aversion, and larger pools, the required loading is much smaller (e.g. 30 b.p.
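The bracketing procedure just described can be sketched as follows; the Gompertz parameters, the pool size, and the illustrative payout function are assumptions for the sake of the example, not the paper's calibrated values.

```python
from math import comb, exp

def tpx(t, x=65.0, m=88.72, b=10.0):
    """Gompertz survival probability from age x to age x + t."""
    return exp(exp((x - m) / b) * (1 - exp(t / b)))

def binom_percentile(n, p, q):
    """Smallest k with P(Bin(n, p) <= k) >= q."""
    acc = 0.0
    for k in range(n + 1):
        acc += comb(n, k) * p**k * (1 - p)**(n - k)
        if acc >= q:
            return k
    return n

def payout_band(d_of_t, n, t, lo=0.10, hi=0.90):
    """Bracket the dividend n*d(t)/N(t) between the hi and lo
    percentiles of the survivor count N(t); more survivors mean
    a smaller individual share, so the percentiles swap roles."""
    p = tpx(t)
    k_hi = max(1, binom_percentile(n, p, hi))  # many survivors -> low payout
    k_lo = max(1, binom_percentile(n, p, lo))  # few survivors -> high payout
    return n * d_of_t(t) / k_hi, n * d_of_t(t) / k_lo

# illustrative: a natural-tontine-style payout d(t) = 0.04 * tpx(t)
band_lo, band_hi = payout_band(lambda t: 0.04 * tpx(t), 400, 20.0)
assert band_lo <= band_hi
```

Sweeping `t` from 0 to 35 and plotting the two band edges reproduces the kind of fan chart described for figure [fig2].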
). Recall that this is a one-time charge at the time of initiation, not an annualized value over the duration of the contract (which would be a significantly smaller value).

* Table [table08] goes here *

Table [table08] computes certainty equivalent factors associated with natural tontines. See section [moreoncertaintyequiv] for further discussion of this comparison. If an individual with LoRA $\gamma$ is faced with a tontine structure that is only optimal for someone with LoRA $\gamma=1$ (i.e. logarithmic utility), the welfare loss is minuscule. This is one reason we advocate the _natural tontine_ payout function, which is only precisely optimal for $\gamma=1$, as the basis for 21st-century tontines.

* Figure [fig3] goes here *

Figure [fig3] compares the optimal tontine payout function for different levels of longevity risk aversion, and shows that the difference is barely noticeable when the tontine pool is large, mainly due to the effect of the law of large numbers. This curve traces the _minimum_ dividend that a survivor can expect to receive at various ages (i.e. when all purchasers survive). The median is (obviously) much higher (see figure [fig2]).

* Figure [fig4] goes here *

Figure [fig4] illustrates that the optimal tontine payout function for someone with logarithmic utility starts off paying the exact same rate as a life annuity, regardless of the number of participants in the tontine pool, i.e. $d(0)=c_0$. But, for higher levels of longevity risk aversion and a relatively smaller tontine pool, the function starts off at a lower value and declines at a slower rate. The perpetuity curve corresponds to the flat payout $d(t)=r$.

* Figure [fig5] goes here *

Figure [fig5] shows that if we compare two retirees, the one who is more averse to longevity risk will prefer a higher guaranteed minimum payout rate (GMPR) at advanced ages.
In exchange, they will accept a lower GMPR at younger ages. We choose to quantify this by taking the natural tontine payout as our baseline, since this is the product structure we advocate. We take quite a small pool size here to illustrate the effect; the effect persists at larger pool sizes, but is much less dramatic.

* Figure [fig6] goes here *

Figure [fig6] shows an actuarially-fair life annuity that guarantees 7.5% for life starting at age 65. It provides more utility than an optimal tontine regardless of longevity risk aversion (LoRA) or the size of the tontine pool. But, once an insurance loading is included, driving the annuity yield under the initial payout from the optimal tontine, the utility of the life annuity might be lower. The indifference loading $\delta^*$ is reported in table [table07]. So here is our main takeaway and idea in the paper, once again. The historical tontine, in which dividends to the entire pool are a constant (e.g. 4%) interest rate over the entire retirement horizon, is suboptimal because it creates an increasing consumption profile that is both variable and undesirable. However, a tontine scheme in which interest payments to the pool early on are higher (e.g. 8%) and then decline over time, so that the few winning centenarians receive a much lower interest rate (e.g. 1%), is in fact the optimal design. Coincidentally, King William's 1693 tontine had a similar declining structure of interest payments to the pool, which was quite rare historically. We are careful to distinguish between the guaranteed _interest_ rate (e.g.
8% or 1% ) paid to the entire pool , and the expected _ dividend _ to the individual investor in the optimal tontine , which will be relatively constant over time , as is evident from figure [ fig2 ] .of course , the present value of the interest paid to the entire pool over time is exactly equal to the original contribution made by the pool itself .we are simply re - arranging and parsing cash flows of identical present value , in a different manner over time .we have shown that a tontine provides less utility than an actuarially - fair life annuity , which is reasonable given that the tontine exposes investors to longevity risk .what is striking is that the utility difference from a properly - designed tontine scheme is actually quite small when compared to an actuarially - fair life annuity , which is the workhorse of the pension economics and lifecycle literature .in fact , the utility from a tontine might actually be higher than the utility generated by a pure life annuity when the insurance ( commission , capital cost , etc . )one - time loading exceeds 10% .this result should not negate or be viewed as conflicting with the wide - ranging annuity literature which proves the optimality of life annuities in a lifecycle model .both tontines and annuities hedge idiosyncratic mortality risk .in fact , what we show is that it is still optimal to fully hedge the remaining systemic longevity risk , but the instrument that one uses to do so depends on the relative costs . 
In other words, _sharing_ longevity risk amongst a relatively small pool of people doesn't create large disutilities or welfare losses, at least within a classical rational model. This finding can also be viewed as a further endorsement of the participating life annuity, which lies in between the tontine and the conventional life annuity. This is not the place, nor do we have the space, for a full review of the literature on tontines, so we provide a selected list of key articles for those interested in further reading. For literature and all sources available on the 1693 tontine, we refer to the earlier-mentioned historical paper by Milevsky (2014) as well as the book by Milevsky (2015). More generally, the original tontine proposal by Lorenzo de Tonti appears in French in Tonti (1654) (and in English translation in the cited reference). The review article by Kopf (1927) and the book by O'Donnell (1936) are quite dated, but document how the historical tontine operated, discuss its checkered history, and provide a readable biography of some of its earliest promoters in Denmark, Holland, France and England. The monograph by Cooper (1972) is devoted entirely to tontines and the foundations of the 19th-century (U.S.) tontine insurance industry, which is based on the tontine concept but is somewhat different because of the savings and lapsation component. In a widely-cited article, Ransom and Sutch (1987) provide the background and story of the banning of tontine insurance in New York State, and then eventually the entire U.S. The comprehensive monograph by Jennings and Trout (1982) reviews the history of tontines, with particular emphasis on the French period, while carefully documenting payout rates and yields from most known tontines.
For those interested in the pricing of mortality-contingent claims during the 17th and 18th centuries, as well as the history and background of the people involved, see Alter (1986), Poitras (2000), Hald (2003), Poterba (2005), Ciecka (2008a, 2008b), Rothschild (2009) as well as Bellhouse (2011), and of course, Homer and Sylla (2005) for the relevant interest rates. More recently, the newspaper article by Chancellor (2001), the book by Lewin (2003) and especially the recent review by McKeever (2009) all provide a very good history of tontines and discuss the possibility of a tontine revival. The standard actuarial textbooks, such as Promislow (2011) or Pitacco et al. (2009) for example, each have a few pages devoted to the historical tontine principle. More relevant to the modern design of participating annuities and the sharing of longevity risk in the 21st century, a series of recent papers on pooled annuity funds have resurrected the topic. For example, Piggott, Valdez and Detzel (2005), Valdez, Piggott and Wang (2006), Stamos (2008), Richter and Weber (2011), Donnelly, Guillén and Nielsen (2013) as well as Qiao and Sherris (2013) have all attempted to reintroduce and analyze tontine-like structures in one form or another.
In these proposed structures, investors contribute to an initial pool of capital, which is invested and then paid out over time to the survivors. It need not be the case that investments are risk-free (as we have assumed for the tontines we analyze), so the control variables are the asset allocation and the payout strategy. The prospectus given to pooled annuity investors should therefore specify both. Donnelly, Guillén and Nielsen (2013) consider a different approach, in the context of pooled annuities, to what we above have called the indifference annuity loading. Instead of the loading being paid in a lump sum at the time of purchase (as we have done), they incorporate a variable fee into their guaranteed product, paid continuously over time. They find that this fee may be structured so the utility of the pooled annuity precisely matches that of the guaranteed product it is being compared to, at all times. They term this fee structure the ``lifetime breakeven costs'', and explore its properties and implications for consumption. To further compare with our results, let us assume for now that the pooled annuity funds are indeed invested risk-free. Following the argument of Stamos (2008), one would then obtain the optimal _proportion_ of the remaining funds that should be paid out at each time, as a function of both time _and the number of individuals remaining in the pool_. In a tontine, the amount withdrawn may vary with time, but not with the number of survivors.
In other words, the prospectus for a tontine should lay out a specific dollar amount to be withdrawn in each year of the contract, so any uncertainty in the amount paid to an individual derives simply from the size of the pool with whom the withdrawal is split. A pooled annuity has the latter uncertainty, but in addition has uncertainty about the dollar amount to be withdrawn, which will vary in a path-dependent way according to how the mortality experience of the pool evolves. To know how much you will receive in year 10, it is no longer enough to know how many survivors there are in year 10; you also need to know how many survived in each of years one through nine. There are both advantages and disadvantages to pooled annuities versus tontines. On the plus side, the pooled annuity has an extra variable to control for, so it should yield higher utility; with risk-free investing we expect its utility to be intermediate between that of a tontine and an annuity. As we have seen, the latter are in fact very close, so the utility gain (though real) is actually modest. On the down side, the path-dependent nature of a pooled annuity (or the annuity overlay introduced by Donnelly et al. (2014)) makes it more complex to explain to participants, and more difficult for those participants to evaluate the risks and uncertainties of their income stream. For these reasons, we (on balance) advocate tontines. The simpler design of tontines certainly makes their calculations less computationally intensive. At a mathematical level, it also allows us to go further in analyzing their properties, which is an important component of the current paper. The closest other work we can find to our natural tontine payout is that of Sabin (2010) on a fair annuity tontine. He gives a specific interpretation of ``fair'' and analyzes how to adjust tontine payments for heterogeneous ages and initial contributions.
A related fairness question for pooled annuities is treated in Donnelly (2015). Our natural tontine is ``fair'' by construction, because we don't mix cohorts or ages. In sum, although research on tontine schemes is scattered across the insurance, actuarial, economic, and history journals, we have come across few, if any, scholarly articles that condemn or dismiss the tontine concept outright. It is not widely known that in the year 1790, the first U.S. Secretary of the Treasury, Alexander Hamilton, proposed one of the largest tontines in history. To help reduce a crushing national debt, something that is clearly not a recent phenomenon, he suggested the U.S. government replace high-interest revolutionary war debt with new bonds in which coupon payments would be made to a group, as opposed to individuals. The group members would share the interest payments evenly amongst themselves, provided they were alive. But, once a member of the group died, his or her portion would stay in the pool and be shared amongst the survivors. This process would theoretically continue until the very last survivor would be entitled to the entire interest payment, potentially millions of dollars. This obscure episode in U.S. history has become known as Hamilton's tontine proposal, which he claimed in a letter to George Washington would reduce the interest paid on U.S. debt, and eventually eliminate it entirely. Although Congress decided not to act on Hamilton's proposal, the tontine idea itself never died on American soil. Insurance companies began issuing tontine insurance policies, which are close cousins to de Tonti's tontine, to the public in the mid-19th century, and they became wildly popular.
By the start of the 20th century, historians have documented, half of U.S. households owned a tontine insurance policy, which many used to support themselves through retirement. The longer one lived, the greater the payments. This was a personal hedge against longevity, with little risk exposure for the insurance company. Sadly though, due to shenanigans and malfeasance on the part of company executives, the influential New York State insurance commission banned tontine insurance in the state, and by 1910 most other states had followed. Tontines have been illegal in the U.S. for over a century, and most insurance executives have likely never heard of them. Tontines not only have a fascinating history but also have some basis in economic principles. In fact, Adam Smith himself, writing in the _Wealth of Nations_, noted that tontines may be preferred to life annuities. We believe that a strong case can be made for overturning the current ban on tontine insurance _allowing both tontines and annuities to co-exist as they did 320 years ago_ with suitable adjustments to alleviate problems encountered in the early 20th century. Indeed, given the insurance industry's concern for longevity risk capacity, and its poor experience in managing the risk of long-dated fixed guarantees, one can argue that an (optimal) tontine annuity is a ``triple win'' proposition for individuals, corporations and governments. See Milevsky (2015) for further arguments along this line. It is worth noting that under the proposed (EU) Solvency II guidelines for insurers' capital as well as risk management, there is a renewed focus on _total balance sheet_ risks. In particular, insurers will be required to hold _more_ capital against market risk, credit risk and operational risk. In fact, a recently-released report, Moody's (2013), claims that _solvency ratios will exhibit a more complex volatility under Solvency II than under Solvency I, as both the available
capital and the capital requirements will change with market conditions._ According to many commentators, this is likely to translate into higher consumer prices for insurance products with long-term maturities and guarantees. And, although this only applies to European companies (at this point in time), it is not unreasonable to conclude that in a global market, annuity loadings will increase, making (participating) tontine products relatively more appealing to price-sensitive consumers. Moreover, perhaps a properly-designed tontine product could help alleviate the low levels of voluntary annuity purchases, a.k.a. the annuity puzzle, by gaming the behavioral biases and prejudices exhibited by retirees. The behavioral economics literature and advocates of cumulative prospect theory have argued that consumers make decisions based on more general _value functions_ with personalized decision weights. Among other testable hypotheses, this leads to a preference for investments with (highly) skewed outcomes, even when the alternative is a product with the same expected present value. Of course, whether the public and regulators can be convinced of these benefits remains to be seen, but a debate would be informative. We are not alone in this quest. Indeed, during the last decade a number of companies around the world, egged on by scholars and journalists, have tried to resuscitate the tontine concept while trying to avoid the bans and taints. Although the specific designs differ from proposal to proposal, all share the same idea we described in this paper: companies act as custodians and guarantee very little. This arrangement requires less capital, which then translates into more affordable pricing for the consumer. Once again, the main qualitative insight from our model is that a properly-designed tontine could hold its own in the _utility arena_ against an actuarially-fair life annuity, and pose a real challenge to a loaded annuity. The next
( natural ) step would be to examine the robustness of our results in a full - blown stochastic mortality model .
the bulk of this section consists of proofs of results from section [ properties ] . to simplify notation , we use as few subscripts as possible , as long as the meaning is clear . we start with some calculations for the binomial distribution . writing $N = 1+\mathrm{Bin}(n-1,p)$ , so that $E[f(N)]=\sum_{k=0}^{n-1}\binom{n-1}{k}p^k(1-p)^{n-k-1}f(k+1)$ , differentiating term by term gives \[\begin{aligned} \frac{d}{dp}\,E[f(N)] & = \sum_{k=0}^{n-1}\binom{n-1}{k}f(k+1)\big[kp^{k-1}(1-p)^{n-k-1}-(n-k-1)p^k(1-p)^{n-k-2}\big]\\ & = \sum_{k=1}^{n-1}\binom{n-1}{k}p^{k-1}(1-p)^{n-k-1}kf(k+1) - \sum_{k=0}^{n-2}\binom{n-1}{k+1}p^{k}(1-p)^{n-k-2}(k+1)f(k+1)\\ & = \frac{1}{p}\sum_{k=1}^{n-1}\binom{n-1}{k}p^k(1-p)^{n-k-1}k[f(k+1)-f(k)] = \frac{1}{p}\,E\big[(N-1)(f(N)-f(N-1))\big].\end{aligned}\] parts ( b ) and ( c ) are elementary calculations . to see ( a ) , apply lemma [ derivativeofmeans ] to obtain that \[\frac{d}{dp}\Big(p\,E\Big[\Big(\frac{N}{n}\Big)^{\gamma-1}\Big]\Big)=\frac{1}{n^{\gamma-1}}\Big(E[N^{\gamma-1}]+p\cdot\frac{1}{p}E[(N-1)(N^{\gamma-1}-(N-1)^{\gamma-1})]\Big) = \frac{1}{n^{\gamma-1}}E[N^\gamma-(N-1)^\gamma]>0.\] the first statement of ( d ) holds , because $\to P(N = n)=p^{n-1}$ and increasing on . since and when , we see that for every , as required . to show part ( b ) it suffices to show that is decreasing in . this follows from ( a ) and ( d ) of lemma [ betabound ] , since . part ( c ) is implicit in the proof of lemma [ rmonotonicity ] , since the hypothesis of that result is not used in showing that initially decreases . numerical evidence ( not reported here ) suggests that there is a ( ) such that is decreasing in whenever .
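the last displayed line is the identity $\frac{d}{dp}E[f(N)]=\frac{1}{p}E[(N-1)(f(N)-f(N-1))]$ for $N = 1+\mathrm{Bin}(n-1,p)$ , and it is straightforward to check numerically . a minimal sketch ( the values of $n$ , $p$ and the test function $f$ are arbitrary choices for illustration ) :

```python
import math

# exact pmf of N = 1 + Binomial(n-1, p), i.e. support {1, ..., n}
def pmf(n, p):
    return {k + 1: math.comb(n - 1, k) * p ** k * (1 - p) ** (n - 1 - k)
            for k in range(n)}

def expect(n, p, g):
    return sum(w * g(k) for k, w in pmf(n, p).items())

n, p = 6, 0.3
f = lambda k: k ** 2              # arbitrary test function

# left side: d/dp E[f(N)], via a central finite difference
h = 1e-6
lhs = (expect(n, p + h, f) - expect(n, p - h, f)) / (2 * h)

# right side: (1/p) E[(N-1)(f(N) - f(N-1))]
rhs = expect(n, p, lambda k: (k - 1) * (f(k) - f(k - 1))) / p
```

the two sides agree up to the finite - difference error , for any choice of $f$ .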
by lemma [ rmonotonicity ] , this would imply for these parameters , without any restriction on the underlying mortality distribution . however , for larger values , can in fact increase over a small range of s . for example , with , and , it increases over the range ] for each , so \[= \begin{cases} 1+e_0 , & a=0\\ e_1 , & a=1\\ \frac{kp(1-p)}{n^2}+e_2 , & a=2 \end{cases} \label{secondbound}\] where in each case . by azuma's inequality again , \[\begin{aligned} & = \int_0^\infty P\big(\big|\tfrac{N(p)}{n}-\mu\big|^3>q\big)\,dq = \int_0^\infty P\big(\big|\tfrac{B - kp}{n}\big|>q^{1/3}\big)\,dq\\ & \le \int_0^\infty P\big(\big|\tfrac{B}{k}-p\big|>q^{1/3}\big)\,dq \le 2\int_0^\infty e^{-\frac12kq^{2/3}}\,dq \label{thirdbound}\\ & = \frac{6}{k^{3/2}}\int_0^\infty e^{-\frac12 z^2}z^2\,dz = \frac{3\sqrt{2\pi}}{k^{3/2}}.\end{aligned}\] by taylor's theorem , \[\begin{gathered} = E\big[\mu^{\gamma-1}+(\gamma-1)\mu^{\gamma-2}\big(\tfrac{N}{n}-\mu\big) + \frac{(\gamma-1)(\gamma-2)}{2}\mu^{\gamma-3}\big(\tfrac{N}{n}-\mu\big)^2\\ + \frac{(\gamma-1)(\gamma-2)(\gamma-3)}{6}\xi^{\gamma-4}\big(\tfrac{N}{n}-\mu\big)^3 , \ \big|\tfrac{N}{n}-\mu\big|\le\eta p\big]\end{gathered}\] where $\xi$ lies between and . applying , , and yields a constant such that if and then \[= \mu^{\gamma-1}+\frac{(\gamma-1)(\gamma-2)}{2}\mu^{\gamma-3}\frac{kp(1-p)}{n^2}+e_4 ,\] where . since it is now immediate that for these s and s , also \[= p^{\gamma-1}+\frac{\gamma(\gamma-1)(1-p)}{2n}p^{\gamma-2}+e_5=p^{\gamma-1}\big(1+\frac{\gamma(\gamma-1)(1-p)}{2np}+\frac{e_5}{p^{\gamma-1}}\big)\] where for some constant . by passing to some if necessary , we obtain finally that \[\begin{gathered} \big)^{1/\gamma}\\ = p\big(1+\frac{(\gamma-1)(1-p)}{2np}+e_6\big) = p\big(1-\frac{(\gamma-1)}{2n}\big)+\frac{\gamma-1}{2n}+pe_6 \label{fourthbound}\end{gathered}\] for and , where , and is a constant depending on and . to apply this , fix , set , and choose and to ensure that holds for both and .
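the closed form of the tail integral used above , $2\int_0^\infty e^{-\frac{1}{2}kq^{2/3}}\,dq = \frac{3\sqrt{2\pi}}{k^{3/2}}$ ( via the substitution $q^{1/3}\sqrt{k}=z$ and a gaussian second moment ) , can be confirmed numerically . a quick sketch ( the value of $k$ , the integration cutoff and the step count are illustrative choices ) :

```python
import math

k = 4.0

def integrand(q):
    # Hoeffding/Azuma tail bound integrand: 2 * exp(-(1/2) * k * q^(2/3))
    return 2.0 * math.exp(-0.5 * k * q ** (2.0 / 3.0))

# composite trapezoidal rule on [0, 50]; the integrand is ~1e-12 at q = 50,
# so the truncated tail is negligible at this tolerance
n_steps, upper = 200000, 50.0
h = upper / n_steps
numeric = 0.5 * (integrand(0.0) + integrand(upper))
numeric += sum(integrand(i * h) for i in range(1, n_steps))
numeric *= h

closed_form = 3.0 * math.sqrt(2.0 * math.pi) / k ** 1.5
```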
for any , the inequality is equivalent to the following : set and . using , we have an upper bound of \[\big[f(t)\big(1-\tfrac{(\gamma_2 - 1)}{2n}\big)+g(t)\tfrac{\gamma_2 - 1}{2n}\big]+\frac{c_5}{n^{3/2}}\] for the left side of ( for some constant ) , provided . there is likewise a lower bound \[\big[f(t)\big(1-\tfrac{(\gamma_1 - 1)}{2n}\big)+g(t)\tfrac{\gamma_1 - 1}{2n}\big]-\frac{c_6}{n^{3/2}}\] for the right side ( for some constant ) , again provided . thus will hold for and , provided and . it is easily seen that , and it follows that on . it is therefore clear how to choose to ensure that the desired conclusion holds . as remarked earlier , the case follows immediately from ( e ) of lemma [ betabound ] . when , we have so \,dt ] increases with . by lemma [ reciprocalbound ] and l'hôpital's rule , \[\begin{aligned} \lim_{a\downarrow 0} E\Big[\Big(\tfrac{n}{N(p)}\Big)^a\Big]^{1/a} & = e^{\lim_{a\downarrow 0}\frac{1}{a}\log E[(\frac{n}{N(p)})^a]}\\ & = e^{\lim_{a\downarrow 0}E[(\frac{n}{N(p)})^a\log(\frac{n}{N(p)})]/E[(\frac{n}{N(p)})^a]} = e^{-E[\log(\frac{N(p)}{n})]}.\end{aligned}\] taking logs shows the result . since , is concave in , so lies below its tangents . therefore \[\le E\big[p^{\gamma-1}+(\gamma-1)p^{\gamma-2}\big(\tfrac{N}{n}-p\big)\big] = p^{\gamma-1}+(\gamma-1)p^{\gamma-2}\frac{1-p}{n}.\] is also strictly concave in , so in the same way therefore by definition , and is also concave in , so as before for any one can derive ( using the second moment of now , as in the proof of theorem [ asymptotictheorem ] ) the asymptotic result that . but convergence turns out to be so slow that this precise asymptotic is of limited use . the slow convergence derives from the observation that , with gompertz mortality , the time till reaches grows only at rate . ( for the same reason , numerical computations are best carried out using rather than
) for example , with , , age , and gompertz parameters and , we obtain ; but for 10 , 100 , or 1000 we only have 0.2858 , 0.3377 , and 0.3671 ; in fact , even with tontine participants ( roughly the world's entire population ) we would only reach . asymptotics become more reliable if tontine and annuity payments are capped at some advanced age , as in theorem [ asymptotictheorem ] . the new loading now satisfies , where is the payout from the capped annuity . with the same parameters as above and capping at age 120 it still takes to give ( well approximating the value of ) . convergence is more rapid with smaller s : with capping at age 110 we only need to give ( approximating 0.3850 ) and for age 100 capping , even yields 0.2855 ( approximating 0.2897 ) . in table [ table08 ] we compare the welfare loss experienced by an individual with longevity risk aversion , if they participate in a natural tontine rather than an optimal one . to do so we calculate the ratio of the certainty equivalents of the two tontines . this represents the initial deposit into a natural tontine needed to provide the same utility as a , which is well behaved as . the denominator integrand involves however , which can be large while , if . this means that for the natural tontine utility is unduly influenced by the possibility of living to highly advanced ages . even if there is only a single survivor , that survivor's payout will have dropped to quite low levels by the time it is actually improbable that anyone will live that long . and for the negative consequences of the low payout dominate the small probability of surviving that long . capping tontine payouts at an advanced age , as in theorem [ asymptotictheorem ] , would enable comparisons for . for example , take age and let us compare with the certainty equivalent at that age from table [ table08 ] ( i.e. , with and ) . capping payouts at age 100 , we can achieve for even with , and for by going to .
with the capping age raised to 110 and , it instead takes to achieve a comparable lora ( ) .

lora & payout age 65 & payout age 80 & payout age 95
0.5 & 7.565% & 5.446% & 1.200%
1.0 & 7.520% & 5.435% & 1.268%
1.5 & 7.482% & 5.428% & 1.324%
2.0 & 7.447% & 5.423% & 1.374%
4.0 & 7.324% & 5.410% & 1.541%
9.0 & 7.081% & 5.394% & 1.847%
survival & % & .2% & .8%

lora & & & & &
0.5 & 72.6 b.p. & 14.5 b.p. & 2.97 b.p. & 1.50 b.p. & 0.30 b.p.
1.0 & 129.8 b.p. & 27.4 b.p. & 5.74 b.p. & 2.92 b.p. &
1.5 & 182.4 b.p. & 39.8 b.p. & 8.45 b.p. & 4.31 b.p. & 0.89 b.p.
2.0 & 231.7 b.p. & 51.8 b.p. & 11.1 b.p. & 5.68 b.p. & 1.18 b.p.
3.0 & 323.1 b.p. & 75.1 b.p. & 16.3 b.p. & 8.38 b.p. & 1.75 b.p.
9.0 & 753.6 b.p. & 199.8 b.p. & 45.9 b.p. & 23.8 b.p. & 5.09 b.p.

age & & &
30 & 1.000018 & 1 & 1.000215
40 & 1.000026 & 1 & 1.000753
50 & 1.000041 & 1 & 1.001674
60 & 1.000067 & 1 & 1.003388
70 & 1.000118 & 1 & 1.003451
80 & 1.000225 & 1 & 1.009877

tontine dividends during the first few decades of retirement are relatively low and predictable for a pool size in the hundreds . the dividends increase exponentially at later ages and the 80% range is much wider as well .
but this is not the only way to construct a tontine .
[ figure : utility starts off paying the exact same rate as a life annuity regardless of the number of participants in the tontine pool . but , for higher levels of longevity risk aversion and a relatively smaller tontine pool , the function starts off at a lower value and declines at a slower rate . ]
[ figure : ratio of the optimal tontine payout to the natural tontine payout . this is flat for since the natural tontine is optimal for logarithmic utility , but for rising we see increased reserving against the risk of living longer than expected . ]
[ figure : they agree at , but the tontine pays more at time zero once loading is included . the utility of the loaded life annuity may be lower than that of the tontine , depending on the size of the loading . ]
| tontines were once a popular type of mortality - linked investment pool . they promised enormous rewards to the last survivors at the expense of those who died early . and , while this design _ appealed to the gambling instinct _ , it is a suboptimal way to generate retirement income . indeed , actuarially - fair life annuities making constant payments where the insurance company is exposed to longevity risk induce greater lifetime utility . however , tontines do not have to be structured the historical way , i.e. with a constant cash flow shared amongst a shrinking group of survivors . moreover , insurance companies do not sell actuarially - fair life annuities , in part due to aggregate longevity risk . we derive the tontine structure that maximizes lifetime utility . technically speaking , we solve the euler - lagrange equation and examine its sensitivity to ( i ) the size of the tontine pool , and ( ii ) individual longevity risk aversion . we examine how the optimal tontine varies with and , and prove some qualitative theorems about the optimal payout .
interestingly , lorenzo de tonti's original structure is optimal in the limit as longevity risk aversion . we define the _ natural tontine _ as the function for which the payout declines in exact proportion to the survival probabilities , which we show is near - optimal for all and . we conclude by comparing the utility of optimal tontines to the utility of loaded life annuities under reasonable demographic and economic conditions and find that the life annuity's advantage over the optimal tontine is minimal . in sum , this paper's contribution is to ( i ) rekindle a discussion about a retirement income product that has long been neglected , and ( ii ) leverage economic theory as well as tools from mathematical finance to design the next generation of tontine annuities .
in modern astronomy , one is increasingly faced with the problem of analysing large , complicated and multidimensional data sets .such analyses typically include tasks such as : data description and interpretation , inference , pattern recognition , prediction , classification , compression , and many more .one way of performing such tasks is through the use of machine learning methods . for accessible accounts of machine learning and its use in astronomy ,see , for example , , and .moreover , machine learning software easily used for astronomy , such as the python - based astroml package , or c - based fast artificial neural network library ( fann ) have recently started to become available .two major categories of machine learning are : _ supervised learning _ and _ unsupervised learning_. in supervised learning , the goal is to infer a function from labeled training data , which consist of a set of training examples .each example has both ` properties ' and ` labels ' .the properties are known ` input ' quantities whose values are to be used to predict the values of the labels , which may be considered as ` output ' quantities .thus , the function to be inferred is the mapping from properties to labels .once learned , this mapping can be applied to datasets for which the values of the labels are not known .supervised learning is usually further subdivided into _ classification _ and _ regression_. in classification , the labels take discrete values , whereas in regression the labels are continuous . in astronomy , for example , using multifrequency observations of a supernova lightcurve ( its properties ) to determine its type ( e.g. ia , ib , ii , etc . )is a classification problem since the label ( supernova type ) is discrete ( see , e.g. 
, ) , whereas using the observations to determine ( say ) the energy output of the supernova explosion is a regression problem , since the label ( energy output ) is continuous . classification can also be used to obtain a distribution for an output value that would normally be treated as a regression problem . this is demonstrated by for measuring redshifts in cfhtlens . a particularly important recent application of regression supervised learning in astrophysics and cosmology ( and beyond ) is the acceleration of the statistical analysis of large data sets in the context of complicated models . in such analyses , one typically performs many ( ) evaluations of the likelihood function describing the probability of obtaining the data for different sets of values of the model parameters . for some problems , in particular in cosmology , each such function evaluation can take up to tens of seconds , making the analysis very computationally expensive . by performing regression supervised learning to infer and then replace the mapping between model parameters and likelihood value , one can reduce the computation required for each likelihood evaluation by several orders of magnitude , thereby vastly accelerating the analysis ( see , e.g. , ) . in unsupervised learning , the data have no labels . more precisely , the quantities ( often termed ` observations ' ) associated with each data item are not divided into properties ( inputs ) and labels ( outputs ) . this lack of a ` causal structure ' , where the inputs are assumed to be at the beginning and outputs at the end of a causal chain , is the key difference from supervised learning .
instead , all the observations are considered to be at the end of a causal chain , which is assumed to begin with some set of ` latent ' ( or hidden ) variables . the aim of unsupervised learning is to infer the number and/or nature of these latent variables ( which may be discrete or continuous ) by finding similarities between the data items . this then enables one to summarize and explain key features of the dataset . the most common tasks in unsupervised learning include _ density estimation _ , _ clustering _ and _ dimensionality reduction_. indeed , in some cases , dimensionality reduction can be used as a pre - processing step to supervised learning , since classification and regression can sometimes be performed in the reduced space more accurately than in the original space . as an astronomical example of unsupervised learning one might wish to use multifrequency observations of the lightcurves of a set of supernovae to determine how many different types of supernovae are contained in the set ( a clustering task ) . alternatively , if the data set also includes the type of each supernova ( determined using spectroscopic observations ) , one might wish to determine which properties , or combination of properties , in the lightcurves are most important for determining their type photometrically ( a dimensionality reduction task ) . this reduced set of property combinations could then be used instead of the original lightcurve data to perform the supernovae classification or regression analyses mentioned above . an intuitive and well - established approach to machine learning , both supervised and unsupervised , is based on the use of artificial neural networks ( nns ) , which are loosely inspired by the structure and functional aspects of a brain . they consist of a group of interconnected nodes , each of which processes information that it receives and then passes this product on to other nodes via weighted connections .
in this way ,nns constitute a non - linear statistical data modeling tool , which may be used to represent complex relationships between a set of inputs and outputs , to find patterns in data , or to capture the statistical structure in an unknown joint probability distribution between observed variables .in general , the structure of a nn can be arbitrary , but many machine learning applications can be performed using only feed - forward nns . for such networksthe structure is directed : an input layer of nodes passes information to an output layer via zero , one , or many ` hidden ' layers in between .such a network is able to ` learn ' a mapping between inputs and outputs , given a set of training data , and can then make predictions of the outputs for new input data .moreover , a universal approximation theorem assures us that we can accurately and precisely approximate the mapping with a nn of a given form .a useful introduction to nns can be found in . in astronomy ,feed - forward nns have been applied to various machine learning problems for over 20 years ( see , e.g. , ) .nonetheless , their more widespread use in astronomy has been limited by the difficulty associated with standard techniques , such as backpropagation , in training networks having many nodes and/or numerous hidden layers ( i.e. ` large ' and/or ` deep ' networks ) , which are often necessary to model the complicated mappings between the numerous inputs and outputs in modern astronomical applications . 
in this paper, we therefore present the first public release of skynet : an efficient and robust neural network training tool that is able to train large and/or deep feed - forward networks .skynet is able to achieve this by using a combination of the ` pre - training ' method of to obtain a set of network weights close to a good optimum of the training objective function , followed by further optimisation of the weights using a regularised variant of newton s method based on that developed for the memsys software package . in particular ,second - order derivative information is used to improve convergence , but without the need to evaluate or store the full hessian matrix , by using a fast approximate method to calculate hessian - vector products .skynet is implemented in the standard ansi c programming language and parallelised using mpi .we also note that skynet has already been combined with multinest , to produce the blind accelerated multimodal bayesian inference ( bambi ) package , which is a generic and completely automated tool for greatly accelerating bayesian inference problems ( by up to a factor of ; see , e.g. , ) .multinest is a fully - parallelised implementation of nested sampling , extended to handle multimodal and highly - degenerate distributions . 
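nested sampling is easy to illustrate on a toy problem . the sketch below is the textbook single - mode scheme with rejection sampling from the prior , not multinest's multimodal ellipsoidal algorithm ; the one - parameter gaussian likelihood , the uniform prior on [0, 1] and all tuning numbers are illustrative assumptions , chosen so that the evidence $z=\int_0^1 l(\theta)\,d\theta$ is known analytically :

```python
import math, random

random.seed(3)

def loglike(theta):
    # toy likelihood: Gaussian in theta with sigma = 0.2; prior uniform on [0, 1]
    return -0.5 * (theta / 0.2) ** 2

n_live = 100
live = [random.random() for _ in range(n_live)]
live_logl = [loglike(t) for t in live]

z, x_prev, i = 0.0, 1.0, 0
while x_prev > 1e-3:
    i += 1
    worst = min(range(n_live), key=lambda j: live_logl[j])
    l_star = live_logl[worst]
    x_i = math.exp(-i / n_live)             # expected shrinkage of prior volume
    z += math.exp(l_star) * (x_prev - x_i)  # weight: likelihood * volume shell
    x_prev = x_i
    while True:                             # replace the worst live point with
        t = random.random()                 # a prior draw above the threshold
        if loglike(t) > l_star:
            live[worst], live_logl[worst] = t, loglike(t)
            break

# the remaining live points account for the final sliver of prior volume
z += x_prev * sum(math.exp(l) for l in live_logl) / n_live

z_true = 0.2 * math.sqrt(2 * math.pi) / 2   # analytic evidence, ~0.2507
```

the estimate carries a statistical error of order $1/\sqrt{n_\mathrm{live}}$ , so it lands near , not exactly on , the analytic value .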
in most astrophysical ( and particle physics ) bayesian inference problems , multinest typically reduces the number of likelihood evaluations required by an order of magnitude or more , compared to standard mcmc methods , but bambi achieves further substantial gains by speeding up the evaluation of the likelihood itself by replacing it with a trained regression neural network . bambi proceeds by first using multinest to obtain a specified number of new samples from the model parameter space , and then uses these as input to skynet to train a network on the likelihood function . after convergence to the optimal weights , the network's ability to predict likelihood values to within a specified tolerance level is tested . if it fails , sampling continues using the original likelihood until enough new samples have been made for training to be performed again . once a network is trained that is sufficiently accurate , its predictions are used in place of the original likelihood function for future samples for multinest . on typical problems in cosmology , for example , using the network reduces the likelihood evaluation time from seconds to less than a millisecond , allowing multinest to complete the analysis much more rapidly . as a bonus , at the end of the analysis the user also obtains a network that is trained to provide more likelihood evaluations near the peak if needed , or in subsequent analyses . with the public release of skynet , we now also make bambi publicly available . the structure of this paper is as follows . in section [ sec : nnstruct ] we describe the general structure of feed - forward nns , including a particular special case of such networks , called autoencoders , which may be used for performing non - linear dimensionality reduction .
in section [ sec : nntrain ] we present the procedures used by skynet to train networks of these types . skynet is then applied to some toy machine learning examples in section [ sec : nntoyex ] , including a regression task , a classification task , and a dimensionality reduction task using autoencoders . we also apply skynet to the problem of classifying images of handwritten digits from the mnist database , which is a widely - used benchmarking test of machine learning algorithms . the application of skynet to astronomical machine learning examples is presented in section [ sec : nnex_astro ] , including : a regression task to determine the projected ellipticity of a galaxy from blurred and noisy images of the galaxy and of a field star ; a classification task , based on a simulated gamma - ray burst detection pipeline for the swift satellite , to determine if a grb with given source parameters will be detected ; and a dimensionality reduction task using autoencoders to compress and denoise galaxy images . finally , we present our conclusions in section [ sec : nndiscuss ] . a multilayer perceptron feed - forward neural network is the simplest type of network and consists of ordered layers of perceptron nodes that pass scalar values from one layer to the next . the perceptron is the simplest kind of node , and maps an input vector $\mathbf{x}$ to a scalar output $f(\mathbf{x};\mathbf{w},\theta)$ via \[f(\mathbf{x};\mathbf{w},\theta)=\theta+\sum_{i}w_i x_i ,\] where $\mathbf{w}$ and $\theta$ are the parameters of the perceptron , called the ` weights ' and ` bias ' , respectively .
for a 3-layer nn , which consists of an input layer , a hidden layer , and an output layer , as shown in fig . [ fig : neuralnet ] , the outputs of the nodes in the hidden and output layers are given by the following equations : \[\text{hidden layer : } \quad h_j = g^{(1)}\big(f_j^{(1)}\big) , \qquad f_j^{(1)} = \theta_j^{(1)} + \sum_i w_{ji}^{(1)} x_i ,\] \[\text{output layer : } \quad y_k = g^{(2)}\big(f_k^{(2)}\big) , \qquad f_k^{(2)} = \theta_k^{(2)} + \sum_j w_{kj}^{(2)} h_j ,\] where $i$ runs over input nodes , $j$ runs over hidden nodes , and $k$ runs over output nodes . the functions $g^{(1)}$ and $g^{(2)}$ are called activation functions and must be smooth and monotonic for our purposes . we use $g^{(1)}(x) = 1/(1+e^{-x})$ ( sigmoid ) and $g^{(2)}(x) = x$ ; the non - linearity of $g^{(1)}$ is essential to allowing the network to model non - linear functions . to expand the nn to include more hidden layers , we iterate for each connection from one hidden layer to the next , each time using the same activation function , $g^{(1)}$ . the final hidden layer will connect to the output layer using the output - layer relation above . the weights and biases are the values we wish to determine in our training ( described in section [ sec : nntrain ] ) . as they vary , a huge range of non - linear mappings from inputs to outputs is possible . in fact , a universal approximation theorem states that a nn with three or more layers can approximate any continuous function to some given accuracy , as long as the activation function is locally bounded , piecewise continuous , and not a polynomial ( hence our use of sigmoid , although other functions would work just as well , such as $\tanh(x)$ ) .
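the forward pass just described amounts to only a few lines of code . a pure - python sketch with one hidden layer , a sigmoid hidden activation and a linear output ( the 2 - 3 - 1 architecture and the weight values are arbitrary illustrations , not taken from skynet ) :

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, W1, b1, W2, b2):
    # hidden layer: h_j = sigmoid(sum_i W1[j][i] * x_i + b1[j])
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    # output layer (linear activation): y_k = sum_j W2[k][j] * h_j + b2[k]
    return [sum(w * hj for w, hj in zip(row, h)) + b
            for row, b in zip(W2, b2)]

# 2 inputs -> 3 hidden nodes -> 1 output, with arbitrary weights
W1 = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]
b1 = [0.0, 0.1, -0.1]
W2 = [[1.0, -1.0, 0.5]]
b2 = [0.2]

y = forward([1.0, 2.0], W1, b1, W2, b2)
```

with all weights set to zero , every hidden activation is sigmoid(0) = 0.5 and the output reduces to the output bias , which makes a convenient sanity check .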
by increasing the number of hidden nodes , one can achieve more accuracy at the risk of overfitting to our training data . other activation functions have also been proposed , such as the rectified linear function , wherein $f(x) = \max(0,x)$ , or the ` softsign ' function , where $f(x) = x/(1+|x|)$ . it has been argued that the former removes the need for pre - training ( as described in section [ sec : nnpretrain ] ) and serves as a better model of biological neurons . the ` softsign ' is similar to $\tanh(x)$ , but with slower approach to the asymptotes of $\pm 1$ ( quadratic rather than exponential ) . autoencoders are a specific type of feed - forward neural network containing one or more hidden layers , where the inputs are mapped to themselves , i.e. the network is trained to approximate the identity operation ; when more than one hidden layer is used this is typically referred to as a ` deep ' autoencoder . such networks typically contain several hidden layers and are symmetric about a central layer containing fewer nodes than there are inputs ( or outputs ) . a basic diagram of an autoencoder is shown in fig . [ fig : autoencoder ] , in which the three inputs are mapped to themselves via three symmetrically - arranged hidden layers , with two nodes in the central layer .
an autoencoder can thus be considered as two half - networks , with one part mapping the inputs to the central layer and the second part mapping the central layer values to the outputs ( which approximate as closely as possible the original inputs ) .these two parts are called the ` encoder ' and ` decoder ' , respectively , and map either to or from a reduced set of ` feature variables ' embodied in the nodes of the central layer ( denoted by and in fig .[ fig : autoencoder ] ) .these variables are , in general , non - linear functions of the original input variables .one can determine this dependence for each feature variable in turn simply by decoding , , and so on , as the corresponding value is varied ; in this way , for each feature variable , one obtains a curve in the original data space .conversely , the collection of feature values in the central layer might reasonably be termed the feature vector of the input data .autoencoders therefore provide a very intuitive approach to non - linear dimensionality reduction and constitute a natural generalisation of linear methods such as principal component analysis ( pca ) and independent component analysis ( ica ) , which are widely used in astronomy .indeed , an autoencoder with a single hidden layer and linear activation functions may be shown to be identical to pca .this topic is explored further in section [ sec : nntoy_ae ] .it is worth noting that encoding from input data to feature variables can also be useful in performing clustering tasks ; this is illustrated in section [ sec : mnist ] .autoencoders are , however , notoriously difficult to train , since the objective function contains a broad local maximum where each output is the average value of the inputs .nonetheless , this difficulty can be overcome by the use of pre - training methods , as discussed in section [ sec : nnpretrain ] .an important choice when training a nn is the number of nodes in its hidden layers .the optimal number and
organisation into one or more layers has a complicated dependence on the number of training data points , the number of inputs and outputs , and the complexity of the function to be trained .choosing too few nodes will mean that the nn is unable to learn the relationship to the highest possible accuracy ; choosing too many will increase the risk of overfitting to the training data and will also slow down the training process . using empirical evidence and theoretical considerations , it has been suggested that the optimal architecture for approximating a continuous function is one hidden layer containing nodes , where is the number of input nodes . also find empirical support for this suggestion .such a choice allows the network to model the form of the mapping function without unnecessary work . in practice, it can be better to over - estimate ( slightly ) the number of hidden nodes required . as described in section [ sec : nntrain ] , skynet performs basic checks to prevent over - fitting , and the additional training time associated with having more hidden nodes is not a large penalty if an optimal network can be obtained in an early attempt . in any case ,given a particular problem , the optimal network structure , both in terms of the number of hidden nodes and how they are distributed into layers , can be determined by comparing the correlation and error squared of different trained nns ; this is illustrated in section [ sec : nntoyex ] .in training a nn , we wish to find the optimal set of network weights and biases that maximise the accuracy of the predicted outputs . 
however , we must be careful to avoid overfitting to our training data , which may lead to inaccurate predictions from inputs on which the network has not been trained .the set of training data inputs and outputs ( or ` targets ' ) , , is provided by the user ( where counts training items ) .approximately per cent should be used for actual nn training and the remainder retained as a validation set that will be used to determine convergence and to avoid overfitting .this ratio of 3:1 gives plenty of information for training but still leaves a representative subset of the data for checks to be made .it is prudent to ` whiten ' the data before training a network .whitening normalises the input and/or output values , so that it is easier to train a network starting from initial weights that are small and centred on zero .the network weights in the first and last layers can then be ` unwhitened ' after training so that the network will be able to perform the mapping from original inputs to outputs .standard whitening transforms each input to a standard distribution by subtracting the mean and dividing by the standard deviation over all elements in the training data , such that [ eq : whitening1 ] an alternative whitening transform is also commonly used , wherein all values are scaled and shifted into the interval ] ) . for the true outputs , all are zero except for the correct output which has a value of unity . for classification networks, the hyper - parameters do not appear in the log - likelihood .the training of the nn can be started from some random initial state , or from a state determined from a ` pre - training ' procedure discussed below . in the former case ,the network training begins by setting random values for the network parameters , sampled from a normal distribution with zero mean and variance of ( this value can be modified by the user ) .
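The standard whitening transform described above, and its inverse applied after training, can be written as a short sketch; NumPy is an assumed implementation language here.

```python
import numpy as np

def whiten(data):
    """Map each variable to zero mean and unit standard deviation,
    computed over all elements in the training data (one item per row).
    Returns the statistics needed to 'unwhiten' afterwards."""
    mu = data.mean(axis=0)
    sigma = data.std(axis=0)
    return (data - mu) / sigma, mu, sigma

def unwhiten(white, mu, sigma):
    """Invert the whitening transform, recovering the original scale."""
    return white * sigma + mu
```

After training, folding `mu` and `sigma` into the first- and last-layer weights lets the network map directly between the original inputs and outputs.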
in the latter case, skynet makes use of the pre - training approach developed by , which obtains a set of network weights and biases close to a good solution of the network objective function .this method was originally devised with autoencoders in mind and is based on the model of restricted boltzmann machines ( rbms ) .an rbm is a generative model that can learn a probability distribution over a set of inputs .it consists of a layer of input nodes and a layer of hidden nodes , as shown in figure [ fig : rbm ] . in this case , the map from the inputs to the hidden layer and then back is treated symmetrically and the weights are adjusted through a number of ` epochs ' , gradually reducing the reproduction error . to model an autoencoder , rbms are ` stacked ' , with each rbm s hidden layer being the input for the next .the initial case is the nn s inputs to the first hidden layer ; this is repeated for the first to second hidden layer and so on until the central layer is reached .the network weights can then be ` unfolded ' by using the transpose for the symmetric connections in the decoding half to provide a decent starting point for the full training to begin .this is shown in fig .[ fig : autoencoder ] , where the and weights matrices are defined by pre - training .the training is then performed using _ contrastive divergence _ .this procedure can be summarised in the following steps , where sampling indicates setting the value of the node to with the probability calculated and otherwise . 1 .take a training sample and compute the probabilities of the hidden nodes ( their values using a sigmoid activation function ) and sample a hidden vector from this distribution .2 . let , where is used to indicate the outer product .3 . using ,compute the probabilities of the visible nodes and sample from this distribution .resample the hidden vector from this to obtain .4 . let .
5 . for some learning rate .more details can be found in and has useful diagrams and explanations .this pre - training approach can also be used for more general feed - forward networks .all layers of weights , except for the final one that connects the last hidden layer to the outputs , are pre - trained as if they were the first half of a symmetric autoencoder .however , the network weights are not unfolded ; instead the final layer of weights is initialised randomly as would have been done without pre - training . in this way, the network ` learns the inputs ' before mapping to a set of outputs .this has been shown to greatly reduce the training time on multiple problems by .we note that when an autoencoder is pre - trained , the activation function to the central hidden layer is made linear and the activation function from the final hidden layer to the outputs is made sigmoidal .general feed - forward networks that are pre - trained continue to use the original activation functions .both of these are simply the default settings and the user has the freedom to alter them to suit their specific problem .once the initial set of network parameters has been obtained , either by assigning them randomly or through pre - training , the network is then trained ( further ) by iterative optimisation of the objective function .first , initial values of the hyperparameters ( for regression networks ) and are set .the values are set by the user and can be set on either the true output values themselves or on their whitened values ( as defined in section [ sec : nntrain_whiten ] ) .the only difference between these two settings is the magnitude of the error used .the algorithm then calculates a large initial estimate for , where is the total number of weights and biases ( nn parameters ) and is a rate set by the user ( , default ) that defines the size of the ` confidence region ' for the gradient .this expression for sets larger regularisation ( or ` damping ' ) when the
magnitude of the gradient of the likelihood is larger .this relates the amount of ` smoothing ' required to the steepness of the function being smoothed .the rate factor in the denominator allows us to increase the damping for smaller confidence regions on the value of the gradient .this results in smaller , more conservative steps that are more likely to result in an increase in the function value but results in more steps being required to reach the optimal weights .nn training then proceeds using an adapted form of a truncated newton ( or ` hessian - free ' ) optimisation algorithm as described below , to calculate the step , , that should be taken at each iteration .following each such step , adjustments to and may be made before another step is calculated .first , can be updated by multiplying it by a value such that .this serves to assure that at convergence , the value equals the number of unconstrained data points of the problem .similarly , is then updated such that the probability is maximised for the current set of nn parameters .these procedures are described in detail by ( * ? ? ?2.3 & 2.6 ) and ( * ? ? ?3.6 & appendix b ) . to obtain the step at each iteration, we first note that one may approximate a general function up to second - order in its taylor expansion by where is the gradient and is the hessian matrix of second derivatives , both evaluated at . for our purposes , the function is the log - posterior distribution of the nn parameters and hence represents a gaussian approximation to the posterior .the hessian of the log - posterior is the regularised ( ` damped ' ) hessian of the log - likelihood function , where the prior , whose magnitude is set by , provides the regularisation .if we define the hessian matrix of the log - likelihood as , then , where is the identity matrix . the regularisation parameter can be interpreted as controlling the level of ` conservatism ' in the gaussian approximation to the posterior . 
in particular, regularisation helps prevent the optimisation becoming trapped in small local maxima by smoothing out the function being explored .it also aids in reducing the region of confidence for the gradient information which will make it less likely that a step results in a worse set of parameters .ideally , we seek a step , such that . using the approximation ,one thus requires in the standard newton s method of optimisation one simply solves this equation directly for to obtain in principle , iterating this stepping procedure will eventually bring us to a local maximum of .moreover , newton s method has the important property of being scale - invariant , namely its behaviour is unchanged under any linear rescaling of the parameters .methods without this property often have problems optimising poorly scaled parameters .there are , however , some major practical difficulties with the standard newton s method .first , the hessian of the log - likelihood is not guaranteed to be positive semi - definite .thus , even after the addition of the damping term derived from the log - prior , the full hessian of the log - posterior may also not be invertible .second , even if is invertible , the inversion is prohibitively expensive if the number of parameters is large , as is the case even for modestly - sized neural networks . to address the first issue, we replace the hessian with a form of gauss newton approximation , which is guaranteed to be positive semi - definite and can be defined both for the regression likelihood and the classification likelihood , respectively . 
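To make the preceding discussion concrete, the sketch below solves the damped Newton system for the step using conjugate gradients and only matrix-vector products. It is written in the minimisation convention, and it uses the finite-difference identity for Hessian-vector products quoted later in the text; SkyNet itself instead employs an exact, numerically stable Hessian-vector pass and a Gauss-Newton curvature matrix. Python/NumPy and all names here are illustrative assumptions.

```python
import numpy as np

def hessian_vector_product(grad, x, v, eps=1e-6):
    """Approximate H v by finite differences of the gradient:
    H v ~= [g(x + eps*v) - g(x)] / eps.  (Illustrative only; SkyNet
    uses a stable, exact network-specific procedure instead.)"""
    return (grad(x + eps * v) - grad(x)) / eps

def damped_newton_step(grad, x, lam=0.0, iters=50, tol=1e-12):
    """Solve (H + lam*I) s = -g for the step s with conjugate
    gradients, never forming or inverting the Hessian itself."""
    g = grad(x)
    apply_B = lambda v: hessian_vector_product(grad, x, v) + lam * v
    s = np.zeros_like(g)
    r = -g - apply_B(s)       # initial residual of the linear system
    p = r.copy()
    rr = r @ r
    for _ in range(iters):
        Bp = apply_B(p)
        alpha = rr / (p @ Bp)
        s += alpha * p        # update the candidate step
        r -= alpha * Bp
        rr_new = r @ r
        if rr_new < tol:      # truncate once the residual is small
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return s
```

Larger values of `lam` give smaller, more conservative steps, mirroring the role of the damping parameter described above.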
in particular , the approximation used differs from the classical gauss newton matrix in that it retains some second derivative information .second , to avoid the prohibitive expense of calculating the inverse in , we instead solve ( with replaced by in ) for iteratively using a conjugate - gradient algorithm , which requires only matrix - vector products for some vector .one can avoid even the computational burden of calculating and storing the hessian . in principle, products of the form can be easily computed using finite differences at the cost of a single extra gradient evaluation using the identity this approach is , however , subject to numerical problems .therefore , we instead calculate products using a stable and efficient procedure applicable to nns .this involves an additional forward and backward pass through the network beyond the initial ones required for a gradient calculation .the combination of all the above methods makes practical the use of second - order derivative information even for large networks and significantly improves the rate of convergence of nn training over standard backpropagation methods .it has been noted that this method for quasi - newton second - order descent is equivalent to the first - order ` natural gradient ' by . following each iteration of the optimisation algorithm ,the posterior , likelihood , correlation , and error squared values are calculated both for the training data and for the validation data ( which were not used in calculating the steps in the optimisation ) .the correlation of the network outputs is defined for each output as where and are the means of these output variables over all the training data ; the functional dependencies of have been dropped for brevity .the correlation provides a relative measure of how well the predicted outputs match the true ones .
in practice, the correlations from each output can be averaged together to give an average correlation for the network s predictions .the average error - squared of the network outputs is defined as the mean of the squared differences between the predicted and true outputs , and is complementary to their correlation , since it is an absolute measure of accuracy .as one might expect , as the optimisation proceeds , there is a steady increase in the values of the posterior , likelihood , correlation , and negative of the error squared , evaluated both for the training and validation data .eventually , however , the algorithm will begin to overfit , resulting in the continued increase of these quantities when evaluated on the training data , but a decrease in them when evaluated on the validation data .this divergence in behaviour is taken as indicating that the algorithm has converged and the optimisation is terminated .the user may choose which of the four quantities listed above is used to determine convergence , although the default is to use the error squared , since it does not include the hyperparameters and in its calculation and is less prone to problems with zeros than the correlation .we note that the correlation and the error - squared functions discussed above also provide quantitative measures with which to compare the performance of different network architectures , both in terms of the number of hidden nodes and how they are distributed into layers . as network size and complexity is increased , a point will be reached at which minimal or no further gains may be achieved in increasing correlation or reducing error - squared .therefore , any network architecture that can achieve this peak performance is equally well - suited .
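The two convergence diagnostics above can be sketched as follows, assuming the correlation takes the standard Pearson form and the error-squared is the mean squared difference between predicted and true outputs (our reading of the stripped equations; NumPy is an assumed language).

```python
import numpy as np

def correlation(pred, true):
    """Pearson correlation between predicted and true values of one
    output: a relative measure of agreement (assumed form)."""
    dp, dt = pred - pred.mean(), true - true.mean()
    return (dp @ dt) / np.sqrt((dp @ dp) * (dt @ dt))

def error_squared(pred, true):
    """Mean squared error: the complementary absolute measure."""
    return np.mean((pred - true) ** 2)
```

Note that an affine rescaling of the predictions leaves the correlation at unity while the error-squared grows, which is why the relative and absolute measures complement each other.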
in practice ,we will wish to find the smallest or simplest network that does so as this minimizes the risk of overfitting and the time required for training .after training a network , in particular a regression network , one may want to calculate the accuracy of the network s predicted outputs .a computationally cheap method of estimating this was suggested by , whereby one adds gaussian noise to the true outputs of the training data and trains a new network on this noisy data . after performing many realisations, the networks predictions will average to the predictions in the absence of the added noise .moreover , the standard deviation of their predictions will provide a good estimate of the accuracy of the original network s predictions . since one can train the new networks using the original trained network as a starting point ,the re - training on noisy data is very fast .additionally , evaluating the ensemble of predictions to measure the accuracy is not very computationally intensive as network evaluations are simple to perform and can be done in less than a millisecond .explicitly , the steps of this method are : 1 . start with the converged network with parameters , trained on the original data set .estimate the noise on the residuals using ^ 2/k ] , for which we evaluate the ramped sinc function , and then add gaussian noise with zero mean and a standard deviation of .the addition of noise makes the regression problem more difficult and prevents any exact solution being possible . to perform the regression , the data items are divided randomly into items for training and for validation . for this simple problem, we use a network with a single hidden layer containing nodes ( we denote the full network by ) , and we whiten the input and output data using . 
the network was not pre - trained .the optimal value for is determined by comparing the correlation and error - squared for networks with different numbers of hidden nodes .these results are presented in fig .[ fig : sincevidence ] , which shows that the correlation increases and the error - squared decreases until we reach hidden nodes , after which both measures level off . thus , adding additional nodes beyond this number does not improve the accuracy of the network .for the network with hidden nodes , we obtain a correlation of greater than per cent ; a comparison of the true and predicted outputs in this case is shown in figure [ fig : sincplot ] , with the training data on the left and the validation data on the right .we now consider a toy classification problem based on the three - way classification data set created by radford neal for testing his own algorithms for nn training . in this dataset , each of four variables , , , and is drawn times from the standard uniform distribution .if the two - dimensional euclidean distance between and is less than , the point is placed in class ; otherwise , if , the class was set to ; and if neither of these conditions is true , the class was set to . note that the values of and play no part in the classification .gaussian noise with zero mean and standard deviation is then added to the input values .approximately per cent of the data was used for training and the remaining per cent for validation .we again use a network with a single hidden layer containing nodes , and we whiten the input and output data using .
the network was not pre - trained .the full network thus has the architecture , where the three output nodes give the probabilities ( which sum to unity ) that the input data belong to class 0 , 1 , or 2 , respectively .the final class assigned is that having the largest probability . the optimal value for is again determined by comparing the correlation and error - squared for networks with different numbers of hidden nodes .these results are shown in fig .[ fig : classprobstruct ] , from which we see that the correlation increases and the error - squared decreases until we reach hidden nodes , after which both measures level off . for the network with hidden nodes ,a total of per cent of training data points and per cent of validation points were correctly classified .a summary of the classification results for this network is given in table [ tab : cdatatrain ] , which lists the classification results for the converged nn with architecture for the neal data set . these results show that the extra information given to the regression networks trained on feature values from the autoencoder acted as a disadvantage in predicting the galaxy ellipticities .
for networks trained on or features , however , the accuracies of the predicted ellipticities were better even than those obtained using the full original images in some cases .this demonstrates the power of being able to eliminate redundant information and noise , and thereby improve the accuracy of the analysis .we also observe that adding unnecessary complexity to the nn structure makes it more difficult for the algorithm to find the global maximum .this same method for dimensionality reduction which also eliminates noise before performing measurements can clearly be applied to a wide range of other astronomical applications .examples include classification of supernovae by type , or measurements of galaxies and stars by their spectra .we have described an efficient and robust neural network training algorithm , called skynet , which we have now made freely available for academic purposes .this generic tool is capable of training large and deep feed - forward networks , including autoencoders , and may be applied to supervised and unsupervised machine learning tasks in astronomy , such as regression , classification , density estimation , clustering and dimensionality reduction .skynet employs a combination of ( optional ) pre - training followed by iterative refinement of the network parameters using a regularised variant of newton s optimisation algorithm that incorporates second - order derivative information without the need even to compute or store the hessian matrix .linear and sigmoidal activation functions are provided for the user to choose between .skynet adopts convergence criteria that naturally prevent overfitting , and it also includes a fast algorithm for estimating the accuracy of network outputs .we first demonstrate the capabilities of skynet on toy examples of regression , classification , and dimensionality reduction using autoencoder networks , and then apply it to the classic machine learning problem of handwriting classification for 
determining digits from the mnist database . in an astronomical context , skynet is applied to : the regression problem of measuring the ellipticity of noisy and convolved galaxy images in the mapping dark matter challenge ; the classification problem of identifying gamma - ray bursters that are detectable by the swift satellite ; and the dimensionality reduction problem of compressing and denoising images of galaxies . in each case, the straightforward use of skynet produces networks that perform the desired task quickly and accurately , and typically achieve results that are competitive with machine learning approaches that have been tailored to the required task .future development of skynet will expand upon many of the current features and introduce new ones .we are working to include more activation functions ( e.g. , softsign , and rectified linear ) , pooling of nodes , convolutional nns , diversity in outputs ( i.e. mixing regression and classification ) , and more robust support of recursive nns .this is all in addition to further improving the speed and efficiency of the training algorithm itself .however , skynet in its current state is already a useful tool for performing machine learning in astronomy .the authors thank john skilling for providing very useful advice in the early stages of algorithm development .we also thank amy lien for providing the data used in section [ sec : grb ] .this work utilized three different high - performance computing facilities at different times : initial work was performed on cosmos viii , an sgi altix uv1000 supercomputer , funded by sgi / intel , hefce and pparc , and the authors thank andrey kaliazin for assistance ; early work also utilized the darwin supercomputer of the university of cambridge high performance computing service ( ` http://www.hpc.cam.ac.uk/ ` ) , provided by dell inc .
using strategic research infrastructure funding from the higher education funding council for england; later work utilised the discover system of the nasa center for climate simulation at nasa goddard space flight center .pg is currently supported by a nasa postdoctoral fellowship from the oak ridge associated universities and completed a portion of this work while funded by a gates cambridge scholarship at the university of cambridge .ff is supported by a research fellowship from the leverhulme and newton trusts . 1 andreon s. , gargiulo g. , longo g. , tagliaferri r. , & capuano n. , 1999 , arxiv : astro - ph/9906099 andreon s. , gargiulo g. , longo g. , tagliaferri r. , & capuano n. , 2000 , mnras , 319 , 700716 auld t. , bridges m. , hobson m.p ., gull s.f . , 2008 , mnras , 376 , l11 auld t. , bridges m. , hobson m.p . , 2008 , mnras , 387 , 1575 ball n.m ., brunner r.j . , 2010 , int. j. mod ., 19 , 1049 bergstra j. , desjardins g. , lamblin p. , & bengio y. , 2009 , technical report 1337 , dpartement dinformatique et de recherche oprationnelle , universit de montral .bertin e. , arnouts s. , 1996 , a&as supplement , 117 , 393 bridges m. , cranmer k. , feroz f. , hobson m.p . , ruiz de austri r. , trotta r. , 2011 , jhep , 03 , 012 bonnett c. , 2013 , arxiv:1312.1287 [ astro-ph.co ] carreira - perpignan m. a. & hinton . g. e. , 2005 , proceedings of the tenth international workshop on artificial intelligence and statistics , eds . cowell r. g. & ghahramani z. , 3340 ciresan d. c. , meier u. , gambardella l. m. , & schmidhuber j. , 2010 , neural comput . , 22 , 32073220 cybenko g. , 1989 , mathematics of control , signals , and systems , 2 , 303314 erhan d. et al . , 2010 ,journal of machine learning research , 11 , 625660 fawcett t. , 2006 , pattern recogn .lett . , 27 , 861 fendt w.a . ,wandelt b.d . , 2007 , apj , 654 , 2 feroz f. , hobson m.p . , 2008 , mnras , 384 , 449 feroz f. , hobson m.p . ,bridges m. , 2009 , mnras , 398 , 1601 feroz f. , hobson m. 
p. , cameron e. , & pettitt a. n. , 2013 , arxiv:1306.2144 [ astro-ph.im ] feroz f. , marshall p. j. , hobson m. p. , 2008 ,[ astro - ph ] fynbo j. et al . , 2009 ,apjs , 185 , 526 gehrels n. et al . , 2004 ,apj , 611 , 1005 geva s. , sitte j. , ieee , 3 , 621 glorot x. & bengio y , 2010 , proceedings of the thirteenth international conference on artificial intelligence and statistics , journal of machine learning research , eds .teh y. w. & titterington m. , 249256 glorot x. , bordes a. , & bengio y. , 2011 , proceedings of the fourteenth international conference on artificial intelligence and statistics , journal of machine learning research , eds .gordon g. & dunson d. , 315323 graff p. , feroz f. , hobson m.p ., lasenby a.n . , 2012 , mnras , 421 , 169 gull s.f .& skilling j. , 1999 , quantified maximum entropy : memsys 5 users manual .maximum entropy data consultants ltd .edmunds , suffolk , uk . ` http://www.maxent.co.uk/ ` hinton g.e . ,osindero s. , & teh y .- w . , 2006 , neural comput . , 18 , 15271554 hinton g.e . &salakhutdinov r.r . , 2006 ,science , 313 , 504 - 507 hobson m. p. , jones a. w. , lasenby a. n. , & bouchet f. r. , 1998 , mnras , 300 , 129 hornik k. , stinchcombe m. & white h. , 1990 , neural networks , 3 , 359 hyvrinen a. , oja e. , 2000 , neural networks , 13 , 411 karpenka n.v . , feroz f. , hobson m.p . , 2013 , mnras , 429 , 12781285 kendall m.g ., 1957 , a course in multivariate analysis .griffin , london kitching t. et al ., 2011 , annals of applied statistics , 5 , 22312263 kitching t. et al . , 2012 , new astronomy reviews , submitted lecun y. , bottou l. , bengio y. , & haffner p. , 1998, proc . of the ieee , 86 , 22782324 lien a. , sakamoto t. , gehrels n. , palmer d. , graziani c. , 2012 , proceedings of the international astronomical union , 279 , 347 longo g. , tagliaferri r. , & andreon s. , 2001 , mining the sky : proceedings of the mpa / eso / mpe workshop , eds .banday a. j. , zaroubi s. , bartelmann m. 
, 379385 mackay d.j.c ., 1992 , neural computation , 4 , 415447 mackay d.j.c . , 1995 ,network : computation in neural systems , 6 , 469 mackay d.j.c , 2003 , information theory , inference , and learning algorithms .cambridge univ . press . ` www.inference.phy.cam.ac.uk/mackay/itila/ ` mandic d. , chambers j. , 2001 , recurrent neural networks for prediction : learning algorithms , architectures and stability .wiley , new york .martens j. , 2010 , in frnkranz j. , joachims t. , eds , proc .machine learning .omnipress , haifa , p. 735 murtagh f. , 1991 , neural comput . , 2 , 183 pascanu r. & bengio y. , 2013 , arxiv:1301.3584 [ cs.lg ] pearlmutter b.a . , 1994 , neural comput . , 6 , 147 sanger t.d . ,1989 , neural networks , 2 , 459 schraudolph n.n . , 2002 , neural comput ., 14 , 1723 serra - ricart m. , calbet x. , garrido l. , & gaitan v. , 1993 , aj , 106 , 1685 skilling j. , 2004 , aip conference series , 735 , 395 tagliaferri r. et al . , 2003a , neural networks , 16 , 297 tagliaferri r. , longo g. , andreon s. , capozziello s. , donalek c. , & giordano g. , 2003b , neural nets : 14th italian workshop on neural nets , eds .apolloni b. , marinaro m. , & tagliaferri r. , 226234 wanderman d. , piran t. , 2010 , mnras , 406 , 1944 way m.j ., scargle j.d . , ali k.m . ,srivastava a.n . , 2012 ,advances in machine learning and data mining for astronomy .crc press . | we present the first public release of our generic neural network training algorithm , called skynet . this efficient and robust machine learning tool is able to train large and deep feed - forward neural networks , including autoencoders , for use in a wide range of supervised and unsupervised learning applications , such as regression , classification , density estimation , clustering and dimensionality reduction . 
skynet uses a ` pre - training ' method to obtain a set of network parameters that has empirically been shown to be close to a good solution , followed by further optimisation using a regularised variant of newton s method , where the level of regularisation is determined and adjusted automatically ; the latter uses second - order derivative information to improve convergence , but without the need to evaluate or store the full hessian matrix , by using a fast approximate method to calculate hessian - vector products . this combination of methods allows for the training of complicated networks that are difficult to optimise using standard backpropagation techniques . skynet employs convergence criteria that naturally prevent overfitting , and also includes a fast algorithm for estimating the accuracy of network outputs . the utility and flexibility of skynet are demonstrated by application to a number of toy problems , and to astronomical problems focusing on the recovery of structure from blurred and noisy images , the identification of gamma - ray bursters , and the compression and denoising of galaxy images . the skynet software , which is implemented in standard ansi c and fully parallelised using mpi , is available at http://www.mrao.cam.ac.uk / software / skynet/. methods : data analysis ; methods : statistical
volatility clustering , evaluated through slowly decaying auto - correlations , the hurst effect or noise for absolute returns , is a characteristic property of most financial asset return time series . statistical analysis alone cannot provide a definite answer for the presence or absence of the long - range dependence phenomenon in stock returns or volatility , unless economic mechanisms are proposed to explain the origin of such phenomena . whether the results of statistical analysis correspond to long - range dependence is a difficult question and the subject of an ongoing statistical debate . agent - based economic models , as well as stochastic models exhibiting the long - range dependence phenomenon in volatility or trading volume , are of great interest and remain an active topic of research . the properties of stochastic multiplicative point processes have been investigated analytically and numerically and a formula for the power spectrum has been derived ; later the model was related to the general form of the multiplicative stochastic differential equation . extensive empirical analysis of financial market data , supporting the idea that long - range volatility correlations arise from trading activity , provides a valuable background for the further development of long - range memory stochastic models . the power - law behaviour of the auto - regressive conditional duration process based on the random multiplicative process , and its special case the self - modulation process exhibiting fluctuations , supported the idea of stochastic modelling with a power - law probability density function ( pdf ) and long memory . a stochastic model of trading activity based on a poisson - like process driven by a stochastic differential equation ( sde ) has already been presented in . the paper proposed a double stochastic model , which generates time series of the return with two power - law statistics , i.e.
, the pdf and the power spectral density of the absolute return , reproducing the empirical data for one - minute trading returns on the nyse . in this contribution we analyze empirical data from the vilnius stock exchange ( vse ) in comparison with the nyse and with the stochastic model proposed in . at the same time we demonstrate the scaling of statistical properties with a longer time window of return . recently we proposed the double stochastic model of return in the financial market based on the nonlinear sde . the main advantage of the proposed model is its ability to reproduce the power spectral density of the absolute return as well as the long - term pdf of return . in the proposed model we assume that the empirical return can be written as instantaneous q - gaussian fluctuations with a slowly diffusing parameter , and the constant q - gaussian distribution of can be written as follows: the parameter serves as a measure of the instantaneous volatility of return fluctuations ; see , for the empirical evidence of this assumption . here the return is defined in the selected time interval as a difference of logarithms of asset prices : \left| \ln[p(t+\tau ) ] - \ln[p(t ) ] \right| .\ ] ] in this paper we consider dimensionless returns normalized by their dispersion calculated over the whole length of the realization . it is worth noticing that the return is an additive variable , i.e. , if , then , or in the continuous limit the sum may be replaced by integration . we propose to model the measure of volatility by a scaled continuous stochastic variable , having the meaning of the average return per unit time interval .
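the conventions above (the absolute log-return over a time interval, normalization by the dispersion of the whole realization, and additivity of signed log-returns) can be checked on a synthetic price path; the geometric random walk below is only a stand-in for real trade data:

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic price path: a geometric random walk stand-in for real trades
log_p = np.cumsum(0.01 * rng.normal(size=1001))
p = np.exp(log_p)

tau = 1
r_signed = np.log(p[tau:]) - np.log(p[:-tau])   # signed log-returns
r_abs = np.abs(r_signed)                        # | ln p(t+tau) - ln p(t) |
r_norm = r_abs / r_abs.std()                    # dimensionless, unit dispersion

# additivity of signed log-returns: two adjacent one-step returns sum
# to the single return over the doubled interval
two_step = np.log(p[2:]) - np.log(p[:-2])
```

the same additivity is what justifies replacing the sum by an integration in the continuous limit.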
by empirical analyses of high - frequency trading data on the nyse we introduced the relation: where is an empirical parameter , and the average return per unit time interval can be modeled by the nonlinear sde , written in a scaled dimensionless time : \frac{\left ( 1+x^2 \right)^{\eta-1}}{(\epsilon \sqrt{1+x^2 } + 1)^2 } x \rmd t_s + \frac{\left ( 1+x^2 \right)^{\frac{\eta}{2}}}{\epsilon \sqrt{1+x^2 } + 1 } \rmd w_s.\ ] ] here five more empirically defined parameters appear : the exponent of multiplicativity , the power - law exponent of the long - range pdf , the parameter dividing diffusion into two areas ( a stationary and an excited one ) , the time - scale adjustment parameter , and the upper limit of diffusion . the term excludes divergence of to infinity . seeking to discover the universal nature of financial markets , we take all these parameters to be universal for all stocks traded on various exchanges . in this paper we analyze empirical data from two very different exchanges : new york , one of the most developed , with highly liquid stocks , and vilnius , an emerging one with rarely traded stocks . we solve numerically by introducing variable steps of dimensionless time : where is the precision parameter of the numerical calculations , which should be less than 1 . the sde , , can then be replaced by the iterative equation: x_k + \kappa \sqrt{1+x_k^2 } \zeta_k , \ ] ] where is a normally distributed random variable with zero mean and unit variance . in we analyzed the tick - by - tick trades of 24 stocks ( abt , adm , bmy , c , cvx , dow , fnm , ge , gm , hd , ibm , jnj , jpm , ko , lly , mmm , mo , mot , mrk , sle , pfe , t , wmt , xom ) traded on the nyse for 27 months from january 2005 , recorded in the trades and quotes database . the parameters of the stochastic model presented in were adjusted to the empirical tick - by - tick one - minute returns . an excellent agreement between the empirical and model pdf and power spectrum was achieved , see fig .
3 in . the same empirical data and model results with slightly changed parameter values are given in ( a , b ) . the noticeable difference between the theoretical and empirical pdfs for small values of return is related to the prevailing trade prices being expressed in integer values of cents ; we do not account for this discreteness in our continuous description . in the empirical power spectrum a one - day resonance , the largest spike together with its higher harmonics , is present . this seasonality , an intraday activity pattern of the signal , is not included in the model either , and this leads to an explicable difference from the observed power spectrum . provided that we use scaled dimensionless equations derived under very general assumptions , we expect the proposed model to work for various assets traded on different exchanges as well as for various time scales . we analyze tick - by - tick trades of 4 stocks , apg1l , ptr1l , srs1l , ukb1l , traded on the vse for 50 months since may 2005 ; the trading data was collected and provided for us by the vse . stocks traded on the vse are less liquid than those on the nyse : the mean inter - trade time for the analyzed vse stocks is 362 s , while for the nyse stocks the mean inter - trade time equals 3.02 s , so the difference in trading activity exceeds a factor of 100 . this great difference is related to the comparatively small number of traders and the comparatively small companies participating in the emerging vse market . whether these different markets have any statistical affinity is an essential question from the theoretical point of view of market modeling . first of all we start with returns at very small time scales . for the vse , up to 95% of one - minute trading time intervals elapse without any trade or price change . one can exclude these time intervals from the sequence when calculating the pdf of return . with such a simple procedure , the calculated pdf of the vse empirical return overlaps with the pdf of the nyse empirical return ( see ( a ) ) .
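returning to the numerical scheme: the variable-step iteration of the nonlinear sde can be sketched as below. the step length is chosen so that the noise increment is exactly kappa*sqrt(1+x_k^2)*zeta_k, as in the iterative equation quoted earlier; the drift prefactor (eta - lambda/2), the reflecting boundary, and all parameter values are assumptions of this sketch rather than fitted values.

```python
import numpy as np

def simulate_returns_sde(n_steps, eta=2.5, lam=3.6, eps=0.05,
                         kappa=0.1, x_max=1e3, x0=1.0, seed=2):
    """Variable-step iteration of the nonlinear SDE for the scaled
    volatility x.  The step h_k is chosen so that the noise increment is
    exactly kappa*sqrt(1+x_k^2)*zeta_k; the drift prefactor (eta - lam/2)
    and all parameter values are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = x0
    t_scaled = 0.0
    for k in range(n_steps - 1):
        xk = x[k]
        s = np.sqrt(1.0 + xk * xk)
        # variable step: h_k = kappa^2 (1+x^2)^(1-eta) (eps*s + 1)^2
        t_scaled += kappa**2 * (1.0 + xk * xk)**(1.0 - eta) * (eps * s + 1.0)**2
        x_new = xk + kappa**2 * (eta - lam / 2.0) * xk + kappa * s * rng.normal()
        # the paper's restricting term is elided; reflect at x_max instead
        if abs(x_new) > x_max:
            x_new = np.sign(x_new) * (2.0 * x_max - abs(x_new))
        x[k + 1] = x_new
    return x, t_scaled

x_path, t_total = simulate_returns_sde(5000)
```

the adaptive step keeps the scheme stable where the multiplicative diffusion is large, at the cost of a nonuniform grid in the scaled time t_s.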
[ figure : pdf of normalized absolute returns is shown in panels ( a ) , ( c ) , ( e ) and psd in panels ( b ) , ( d ) , ( f ) , for three increasing values of the return time scale ; empirical data from the nyse is averaged over 24 stocks and empirical data from the vse is averaged over 4 stocks . ] one should use the full time sequence of returns when calculating the power spectrum .
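computing the psd on the full sequence, zero-return intervals included, can be done with a simple averaged periodogram; this is a generic welch-style sketch on synthetic data, not the exact estimator used for the figures:

```python
import numpy as np

def psd(series, n_seg=8):
    """Averaged periodogram (Welch-style, non-overlapping rectangular
    windows) computed on the full sequence, zeros included."""
    x = np.asarray(series, dtype=float)
    seg_len = len(x) // n_seg
    segs = x[:n_seg * seg_len].reshape(n_seg, seg_len)
    segs = segs - segs.mean(axis=1, keepdims=True)   # remove the DC offset
    spec = np.abs(np.fft.rfft(segs, axis=1))**2 / seg_len
    freqs = np.fft.rfftfreq(seg_len, d=1.0)          # in units of 1/sample
    return freqs[1:], spec.mean(axis=0)[1:]          # drop the zero frequency

# illiquid-market stand-in: mostly zero returns with sparse bursts
rng = np.random.default_rng(3)
r = np.abs(rng.normal(size=2**14)) * (rng.random(2**14) < 0.05)
f, S = psd(r)
```

the long runs of zeros change the high-frequency (white-noise) part of the spectrum, which is exactly the effect discussed for the vse data.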
nevertheless , despite the low vse liquidity , the psds of vse and nyse absolute returns almost overlap . a difference is clearly seen only for higher frequencies , when , and is related to the low vse market liquidity contributing to the appearance of white noise . the different lengths of trading sessions in financial markets cause different positions of the resonant spikes . one can conclude that even so marginal a market as the vse retains the essential statistical features of a developed market such as the nyse . at first glance , the statistical similarity should be even better for higher values of the return time scale . further , we investigate the behavior of returns on the nyse and vse for increased values of and , with the specific interest of checking whether the proposed stochastic model scales in the same way as the empirical data . apparently , as we can see in ( d ) and ( f ) , the psds of absolute returns on the vse and on the nyse overlap even better at larger time scales ( seconds and seconds ) . this serves as an additional argument for the very general origin of the long - range memory properties observed in markets that are very different liquidity - wise . the nonlinear sde is an applicable model to capture the observed empirical properties . the pdfs of absolute return observed in both markets ( see ( c ) and ( e ) ) are practically identical , though we still have to ignore zero returns of the vse to arrive at the same normalization of the pdf . we proposed a double stochastic process driven by the nonlinear scaled sde reproducing the main statistical properties of the absolute return observed in the financial markets . seven parameters of the model enable us to adjust it to the sophisticated power - law statistics of various stocks , including the long - range behaviour . the scaled dimensionless form of the equations gives an opportunity to deal with averaged statistics of various stocks and to compare the behaviour of different markets . all the parameters introduced are recoverable from the empirical data and are responsible for the specific statistical features of
real markets . seeking to discover the universal nature of return statistics we analyse and compare extremely different markets in new york and vilnius andadjust the model parameters to match statistics of both markets .the most promising result of this research is discovered increasing coincidence of the model with empirical data from the new york and vilnius markets and between markets , when the time scale of return is growing .observable specific features of different markets could be a subject of another research based on the proposed model .for example , it is clear that parameter should be relevant to the maximum number of active traders in the market and consequently should be specific for the every market .further analyses of empirical data and proposed model reasoning by agent behavior is ongoing . 99 willinger , w. taqqu , m. and teverovsky , v. 1999 ._ stock market prices and long - range dependence_. finance stochast . 3 : 113 cont , r. _ long range dependence in financial markets_. springer - fractals in engineering , ( e. lutton and j. vehel , eds . ) , ( 2005 ) 159 - 180 mikosch , t. and starica , c. _ long - range dependence effects and arch modeling _ , birkhauser boston - theory and applications of long - range dependence , ( 2003 ) 439459 kirman , a. and teyssiere , g. 2002 ._ microeconomic models for long - memory in the volatility of financial time series _ , studies in nonlinear dynamics and econometrics , 5 : 281302 lux , t. and marchesi , m. 2000 . _ volatility clustering in financial markets : a microsimulation of interacting agents _ , int .finance , 3 : 675702 6 .borland , l. 2004 ._ on a multi - timescale statistical feedback model for volatility fluctuations _ , arxiv : cond - mat/0412526 duarte queiros , s.m_ on a generalised model for time - dependent variance with long - term memory_. epl , 80 : 30005 gontis , v. kaulakys b. and ruseckas , j. 2008 ._ trading activity as driven poisson process : comparison with empirical data_. j. 
physica a 387 : 3891 - 3896 gontis , v. ruseckas j. and kononoviius a. 2010 ._ a long - range memory stochastic model of the return in financial markets_. j. physica a 389 : 100 - 106 gontis v. and kaulakys , b. 2004 ._ multiplicative point process as a model of trading activity_. physica a 343 : 505 - 514 kaulakys , b. ruseckas , j. gontis , v. and alaburda , m. 2006 ._ nonlinear stochastic models of noise and power - law distributions_. physica a 365 : 217 - 221 .kaulakys , b. and alaburda , m. 2009 ._ modeling scaled processes and noise using nonlinear stochastic differential equations_. j. stat .plerou , v. gopikrishnan , p. gabaix , x. et al . 2001 . _ price fluctuations , market activity , and trading volume_. quant .finance 1 : 262 - 269 .gabaix , x. gopikrishnan , p. plerou , v. and stanley , h.e ._ a theory of power - law distributions in financial market fluctuations_. nature 423 : 267 - 270 sato , a .- h_ explanation of power law behavior of autoregressive conditional duration processes based on the random multiplicative process_. phys .e 69 : 047101 .takayasu , m. takayasu , h. 2003 . _ self - modulation processes and resulting generic fluctuations_. phys . a 324 : 101 | we scale and analyze the empirical data of return from new york and vilnius stock exchanges matching it to the same nonlinear double stochastic model of return in financial market . |
let be a -dimensional lvy process without diffusion component , that is , .\ ] ] here , is a poisson random measure on with intensity satisfying and denotes the compensated version of .we study the case when , that is , there is an infinite number of jumps in every interval of nonzero length a.s .further , let be an -valued adapted stochastic process , unique solution of the stochastic differential equation , \label{pt.sde}%\end{aligned}\ ] ] where is an matrix .in this article we are interested in the numerical evaluation of ] .by lemma 13 in , \times \mathbb r) ] for some constant which does not depend on .our first scheme is based on matching the first 3 moments of the process .let be the unit sphere in the -dimensional space , and be a lvy measure on written in spherical coordinates and and satisfying .denote by the reflection of with respect to the origin defined by .we introduce two measures on : the _3-moment scheme _ is defined by \times s^{d-1 } } r\theta\ , \nu_\varepsilon(dr , d\theta ) , \label{pt.3mom2}\end{aligned}\ ] ] where denotes a point mass at .[ pt.3mom.prop ] for every , is a finite positive measure satisfying where the last inequality is an equality if .the positivity of being straightforward , let us check .let be the coordinate vectors .then , the other equations can be checked in a similar manner .let .then the 3-moment scheme can be written as assume or . then the solution of with the characteristics of given by satisfies | = o(\lambda_\varepsilon^{-1}).\ ] ] as . by proposition [ pt.basicbound ]we need to show that by proposition [ pt.3mom.prop ] , where in the last line the dominated convergence theorem was used . 
in many parametric or semiparametric models ,the lvy measure has a singularity of type near zero .this is the case for stable processes , tempered stable processes , normal inverse gaussian process , cgmy and other models .stable - like behavior of small jumps is a standard assumption for the analysis of asymptotic behavior of lvy processes in many contexts , and in our problem as well , this property allows to obtain a more precise estimate of the convergence rate .we shall impose the following assumption , which does not require the lvy measure to have a density : * * : : there exist and such that if and if . ] where .assume and or .then the solution of with the characteristics of given by satisfies | = o\left(\lambda_\varepsilon^{1-\frac{4}{\alpha}}\right).\ ] ] under , by integration parts we get that for all , therefore , under this assumption , from which the result follows directly .[ [ rate - optimality - of - the-3-moment - scheme ] ] rate - optimality of the 3-moment scheme + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + from proposition [ pt.basicbound ] we know that under the assumption or , the approximation error of a scheme of the form can be measured in terms of the -th absolute moment of the difference of lvy measures .we introduce the class of lvy measures on with intensity bounded by : the class of lvy measures with intensity bounded by is then denoted by , and the smallest possible error achieved by any measure within this class is bounded from below by a constant times .the next result shows that as , the error achieved by the 3-moment scheme differs from this lower bound by at most a constant multiplicative factor .assume and let be given by .then , _ step 1 ._ let us first compute for , let where is absolutely continuous with respect to and is singular .then and which are absolutely continuous with respect to , or , in other words , where the is taken over all measurable functions such that . 
by a similar argument, one can show that it is sufficient to consider only functions ] .let and be such that such a can always be determined uniquely and is determined uniquely if .it follows that is a minimizer for and therefore where and are solutions of _ step 2 ._ for every , let and be solutions of it is clear that as and after some straightforward computations using the assumption we get that then , under the three limits are easily computed and we finally get the constant appearing in the right - hand side of can not be interpreted as a `` measure of suboptimality '' of the 3-moment scheme , but only as a rough upper bound , because in the optimization problem the moment - matching constraints were not imposed ( if they were , it would not be possible to solve the problem explicitly ) .on the other hand , the fact that this constant is unbounded as suggests that such a rate - optimality result can not be shown for general lvy measures without imposing the assumption . [ [ numerical - illustration ] ] numerical illustration + + + + + + + + + + + + + + + + + + + + + + we shall now illustrate the theoretical results on a concrete example of a sde driven by a normal inverse gaussian ( nig ) process , whose characteristic function is = \exp\left\{-\delta t \left(\sqrt{\alpha^2 - ( \beta - iu)^2 } - \sqrt{\alpha^2 - \beta^2}\right)\right\},\ ] ] where , and are parameters .the lvy density is given by where is the modified bessel function of the second kind .the nig process has stable - like behavior of small jumps with , ( which means that is satisfied with ) , and exponential tail decay .the increments of the nig process can be simulated explicitly ( see ( * ? ? ?* algorithms 6.9 and 6.10 ) ) , which enables us to compare our jump - adapted algorithm with the classical euler scheme . 
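the explicit simulation of nig increments referred to above exploits the representation of the nig law as brownian motion with drift subordinated by an inverse gaussian clock. the sketch below follows that construction; the (mean, shape) parametrisation of the inverse gaussian sampler is our choice, and the parameter values are illustrative:

```python
import numpy as np

def sample_ig(mean, shape, rng):
    """Michael-Schucany-Haas sampler for the inverse Gaussian law,
    in the (mean, shape) parametrisation."""
    y = rng.normal()**2
    x = (mean + mean**2 * y / (2.0 * shape)
         - mean / (2.0 * shape) * np.sqrt(4.0 * mean * shape * y + mean**2 * y**2))
    if rng.random() <= mean / (mean + x):
        return x
    return mean**2 / x

def sample_nig(t, alpha, beta, delta, rng):
    """One NIG(alpha, beta, delta) increment over time t: Brownian motion
    with drift beta, subordinated by an inverse Gaussian clock."""
    gamma = np.sqrt(alpha**2 - beta**2)
    z = sample_ig(delta * t / gamma, (delta * t)**2, rng)  # IG subordinator
    return beta * z + np.sqrt(z) * rng.normal()

rng = np.random.default_rng(4)
draws = np.array([sample_nig(1.0, alpha=2.0, beta=0.0, delta=1.0, rng=rng)
                  for _ in range(20000)])
# for beta = 0 the law is symmetric with variance delta/alpha = 0.5
```

this is the kind of exact increment sampler that makes the comparison with the euler scheme possible in the numerical example below.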
for the numerical examplewe solve the one - dimensional sde where is the nig lvy process ( with drift adjusted to have =0 ] by monte - carlo using the 3-moment scheme described in this section ( marked with crosses ) , the diffusion approximation of ( circles ) and the classical euler scheme ( diamonds ) .the parameter values are , , , and . for each schemewe plot the logarithm of the approximation error as function of the logarithm of the computational cost ( time needed to simulate trajectories ) .the curves are obtained by varying the truncation parameter for the two jump - adapted schemes and by varying the discretization time step for the euler scheme .the approximation error for the euler scheme is a straight line with slope corresponding to the theoretical convergence rate of .the graph for the 3-moment scheme seems to confirm the theoretical convergence rate of ; the scheme is much faster than the other two and the corresponding curve quickly drops below the dotted line which symbolizes the level of the statistical error .in this section , we develop schemes of arbitrary order for lvy processes with stable - like behavior of small jumps . throughout this section , we take and let be a lvy process with characteristic triplet satisfying the following refined version of : : : there exist , , with and such that introduce the probability measure let and . the high - order scheme for the stochastic differential equation based on moments and truncation level is constructed as follows : 1 . [ pt.step1 ] find a discrete probability measure with such that , for all and for all .2 . 
compute the coefficients by solving the linear system 3 .the high - order scheme is defined by the first step in implementing the scheme is to solve the moment - matching problem for measure .the existence of at least one solution to this problem with and for all is guaranteed by the classical caratheodory s theorem , but this problem admits , in general , an infinite number of solutions .here we impose the additional condition and for all , which should be checked on a case by case basis in concrete realizations of the scheme ( see example [ pt.momentex ] ) .it is easy to see that the measure matches the moments of orders of , where is the measure given by that is , satisfies the assumption with equalities instead of equivalences .the idea of the method is to replace the coefficients with a different set of coefficients while keeping the same points to obtain a measure which matches the moments of .therefore , the points do not depend on the truncation parameter while the coefficients depend on it .[ pt.momentex ] as an example we provide a possible solution of the moment matching problem for , which leads to a 5-moment scheme ( matching 3 moments of or moments of the lvy process ) .we assume that has mass both on the positive and the negative half - line : .the moments of are given by it is then convenient to look for the discrete measure matching the first 3 moments of in the form where , are parameters to be identified from the moment conditions for the purpose of solving this system of equations , let be a random variable such that = p = 1- p[\mathcal e = \varepsilon_1]$ ] . from the moment conditions , we get : = \frac{2-\alpha}{3-\alpha}\label{pt.epsbar3},\qquad \sigma^2 : = \text{var}\ , \mathcal e = \frac{(2-\alpha)}{(4-\alpha)(3-\alpha)^2},\\ s & : = \frac{e[(\mathcal e - e[\mathcal e])^3]}{\sigma^3 } = 2\frac{\alpha-1}{5-\alpha}\sqrt{\frac{4-\alpha}{2-\alpha}}. 
\label{pt.skew3}\end{aligned}\ ] ] on the other hand , the skewness can be directly linked to the weight : and the parameters and can be linked to and : the dependence of , and on is shown in figure [ pt.moment.fig ] : it is clear from the graph that the constraints and are satisfied for all : therefore , equations ( [ pt.mubar3][pt.eps123 ] ) define a -atom probability measure which matches the first 3 moments of . ).,scaledwidth=70.0% ] let be fixed according to .there exists such that for all , is a positive measure satisfying there exist positive constants and such that [ pt.ho.prop ] assume or . then the solution of with characteristics of given by satisfies | = o\left(\lambda_\varepsilon^{1-\frac{n+3}{\alpha}}\right).\ ] ] the moment conditions hold by construction . using integration by parts ,we compute therefore , since the matrix , , is invertible ( vandermonde matrix ) , this implies that . therefore , there exists such that for all , for all and is a positive measure .we next compute : research is supported by the chair financial risks of the risk foundation sponsored by socit gnrale , the chair derivatives of the future sponsored by the fdration bancaire franaise , and the chair finance and sustainable development sponsored by edf and calyon . | we propose new jump - adapted weak approximation schemes for stochastic differential equations driven by pure - jump lvy processes . the idea is to replace the driving lvy process with a finite intensity process which has the same lvy measure outside a neighborhood of zero and matches a given number of moments of . by matching 3 moments we construct a scheme which works for all lvy measures and is superior to the existing approaches both in terms of convergence rates and easiness of implementation . in the case of lvy processes with stable - like behavior of small jumps , we construct schemes with arbitrarily high rates of convergence by matching a sufficiently large number of moments . 
key words : lévy - driven stochastic differential equation , euler scheme , high order discretization schemes , jump - adapted discretization , weak approximation . 2010 mathematics subject classification : primary 60h35 , secondary 65c05 , 60g51 . |
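the moment-matching step at the heart of these schemes can be illustrated in the simplest setting: a two-atom probability measure matching three prescribed moments. the paper's example uses three atoms with one at the origin; this generic two-atom version is only a sketch of the same idea:

```python
import numpy as np

def two_atom_match(m1, m2, m3):
    """Two-atom probability measure matching the first three raw moments
    m1, m2, m3 (requires m2 - m1**2 > 0)."""
    mu, var = m1, m2 - m1**2
    sig = np.sqrt(var)
    skew = (m3 - 3.0 * mu * var - mu**3) / sig**3
    d = np.sqrt(1.0 + 0.25 * skew**2)
    a, b = 0.5 * skew - d, 0.5 * skew + d      # standardized atoms, a*b = -1
    weights = np.array([b / (b - a), -a / (b - a)])
    atoms = np.array([mu + sig * a, mu + sig * b])
    return atoms, weights

# example: match the first three moments of the Exp(1) law (m_k = k!)
atoms, w = two_atom_match(1.0, 2.0, 6.0)
mk = lambda k: np.sum(w * atoms**k)            # moments of the discrete law
```

both weights are automatically positive, which is the discrete analogue of the positivity requirement on the approximating lévy measure.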
in these supplementary notes , we want to outline explicitly how , starting from the master equation associated with the stochastic may - leonard model [ defined by the reactions ( 1 ) ] , the set of stochastic partial differential equations ( 3 ) can be obtained via a system size expansion . for the sake of illustration , here we focus on the role of the internal noise stemming from the reactions ( 1 ) , and ignore the spatial degrees of freedom . as detailed in a forthcoming publication [ 18 ] , and following the reasoning presented in [ 16 ] , in a proper continuum limit the spatial dispersal of individuals is accounted for by diffusive terms in the spde ( 3 ) . as in the main text , the overall number of individuals is denoted and stands for the frequencies ( or densities ) of the species , , and in the population ( i.e. and ) . the master equation giving the time evolution of the probability of finding the system in the state at time then reads where denotes the transition probability from state to the state within one time step ( loss term ) , is the analogous gain term , and the summation extends over all possible changes . as an example , the relevant changes in the density resulting from the basic reactions ( 1 ) are in the third reaction , in the fourth , and zero in all others . we also choose the unit of time such that , on average , every individual reacts once per time step . the transition rates resulting from the reactions ( 1 ) then read for the reaction ( the prefactor of enters due to our choice of time scale , where reactions occur in one unit of time ) and for . transition probabilities associated with all the other reactions ( 1 ) follow similarly . the system size , or kramers - moyal , expansion ( sze ) [ 16 ] of the master equation is an expansion in the increment , which is proportional to . therefore , the sze may be understood as an expansion in the inverse system size .
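before any expansion, the reaction scheme (1) can be simulated exactly at finite population size with the gillespie algorithm. in this sketch the 1/N normalisation of the pair-reaction rates is an assumption, chosen so that the mean-field limit recovers rate equations of may-leonard type:

```python
import numpy as np

def gillespie_may_leonard(N=500, mu=1.0, sigma=1.0, t_max=20.0, seed=5):
    """Exact stochastic simulation of the cyclic reactions:
    selection X_i X_{i+1} -> X_i E (rate sigma) and reproduction
    X_i E -> X_i X_i (rate mu), with E an empty site.  The 1/N pair-rate
    normalisation is an assumption chosen to recover the rate equations."""
    rng = np.random.default_rng(seed)
    n = np.array([N // 4, N // 4, N // 4])     # counts of the three species
    t = 0.0
    while t < t_max:
        empty = N - n.sum()
        sel = sigma * n * np.roll(n, -1) / N   # species i consumes i+1
        rep = mu * n * empty / N               # species i fills an empty site
        rates = np.concatenate([sel, rep])
        total = rates.sum()
        if total == 0.0:                       # absorbing state reached
            break
        t += rng.exponential(1.0 / total)
        j = rng.choice(6, p=rates / total)
        if j < 3:
            n[(j + 1) % 3] -= 1                # the consumed individual leaves
        else:
            n[j - 3] += 1                      # reproduction into the empty site
    return n, t

counts, t_end = gillespie_may_leonard()
```

trajectories of this exact simulation are the reference against which the expanded (fokker-planck or langevin) description is judged.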
to second order in , it yields the ( generic ) fokker - planck equation [ 16 ] : +\frac{1}{2}\partial_i\partial_j[\mathcal{b}_{ij}({\bm s})p({\bm s},t ) ] ~. \label{fokker_planck}\ ] ] for the system under consideration , in the above the indices and the summation convention in ( [ fokker_planck ] ) implies sums carried over them . according to the kramers - moyal expansion ( or sze ) , the quantities and are given by [ 16 ] note that is symmetric . for the sake of clarity, we outline the calculation of : the relevant changes result from the third and fourth reactions in ( 1 ) , as described above .the corresponding rates respectively read and , resulting in .the other quantities are computed analogously .all explicit expressions for and will be derived and given in detail in [ 18 ] .the well - known correspondence between fokker - planck equations and ito calculus [ 16 ] implies that ( [ fokker_planck ] ) is equivalent to the following set of ito stochastic differential equations ( with the above summation convention ) : where the matrix is defined from via the relation [ 16 ] , and the s denote ( uncorrelated ) gaussian white noise terms .note that for the model under consideration is diagonal and one can therefore always choose a diagonal matrix for [ see eq .( 5 ) ] , with only s contributing to the right - hand side of eqs . ( 11 ) . in ref.[18 ] , we demonstrate that ( in a proper continuum limit ) spatial degrees of freedom and exchange processes simply yield additional diffusive terms in ( [ stoch_part_eq ] ) .this leads to the spde ( 3 ) , given and discussed in the main text , in which the still denote gaussian white noise contributions .the spde ( 3 ) and the cgle ( 6 ) , as well as the analytical predictions ( 7 ) , are valid in all spatial dimensions . 
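the ito sdes equivalent to the fokker-planck equation can be integrated by euler-maruyama with diagonal noise amplitude C_ii = sqrt(B_ii), of order 1/sqrt(N). the gain/loss structure of the drift and diffusion terms below mirrors the description in the text, but their exact expressions, like the parameter values, are assumptions of this sketch:

```python
import numpy as np

def langevin_may_leonard(N=10**4, mu=1.0, sigma=1.0, dt=1e-3,
                         n_steps=5000, seed=6):
    """Euler-Maruyama integration of Ito SDEs of the form
    ds_i = A_i dt + C_ii dW_i with diagonal C_ii = sqrt(B_ii) and noise
    of order 1/sqrt(N).  The gain/loss structure mirrors the text, but
    the exact expressions are assumptions of this sketch."""
    rng = np.random.default_rng(seed)
    s = np.array([0.3, 0.3, 0.3])              # species densities
    for _ in range(n_steps):
        rho = s.sum()
        gain = mu * s * (1.0 - rho)            # reproduction into empty sites
        loss = sigma * np.roll(s, 1) * s       # species i-1 consumes species i
        drift = gain - loss
        b_diag = np.clip((gain + loss) / N, 0.0, None)
        s = s + drift * dt + np.sqrt(b_diag * dt) * rng.normal(size=3)
        s = np.clip(s, 0.0, 1.0)               # densities stay in [0, 1]
    return s

s_final = langevin_may_leonard()
```

setting N to infinity switches the noise off and recovers the deterministic rate equations, which is the comparison made throughout the text.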
while in the text , for the sake of specificity, we mainly focus on the two - dimensional situation , here we comment on some features of the one - dimensional version of the system , as well as on some properties of the general situation in higher dimensions . in one dimension and in the absence of exchange processes( mixing ) , our model is expected to exhibit coarsening like the cyclic lotka - volterra model ( see e.g. [ 12 ] ) .the same phenomenon still occurs in the presence of very slow mixing rate ( for the model under consideration , this means very low values of the exchange rate ) . on the other hand , and as shown in the main text , in the presence of ( finite ) mixing the systems behavior is aptly described by the spde ( 3 ) .the underlying cgle ( 6 ) predicts the propagation of traveling waves , with velocity and wavelength still given by the analytical expression ( 7 ) .similarly to what has been found in the two - dimensional system [ 6 ] , if the exchange rate is `` moderate '' , i.e. below a certain mobility threshold but still finite [ 6,15 ] , the system confirms these predictions and the species coexist .however , due to stochastic events and the presence of absorbing boundaries , some domains will occasionally merge , resulting in growing domains .this coarsening phenomenon will happen on a much longer time - scale than in the absence of the mixing processes . above the threshold value for the diffusivity ,the system is well - mixed , the underlying spatial structure plays no role , and the description in terms of the rate equations ( 2 ) is valid . in this mean - field scenario ,the system approaches an absorbing steady state and no patterns emerge . in higher dimensions ( i.e. in dimensions ) ,the description of the stochastic spatial system in terms of the spde ( 3 ) and of the cgle ( 7 ) is again valid for `` moderate '' ( or intermediate ) mixing ( i.e. for a finite mobility rate which is below a certain critical threshold , see [ 6 ] ) . 
in this situation , and as discussed in the main text , the description in terms of the cgle ( 6 ) is qualitatively valid and predicts the emergence of moving spiral waves in two dimensions ( which is the case discussed in detail in the main text , see also [ 6,15 ] ) , and to `` scroll waves '' , i.e. vortex filaments , in three dimensions [ 24 ] . | noise and spatial degrees of freedom characterize most ecosystems . some aspects of their influence on the coevolution of populations with cyclic interspecies competition have been demonstrated in recent experiments [ e.g. b. kerr et al . , nature * 418 * , 171 ( 2002 ) ] . to reach a better theoretical understanding of these phenomena , we consider a paradigmatic spatial model where three species exhibit cyclic dominance . using an individual - based description , as well as stochastic partial differential and deterministic reaction - diffusion equations , we account for stochastic fluctuations and spatial diffusion at different levels , and show how fascinating patterns of entangled spirals emerge . we rationalize our analysis by computing the spatio - temporal correlation functions and provide analytical expressions for the front velocity and the wavelength of the propagating spiral waves . understanding the combined influence of spatial degrees of freedom and noise on biodiversity is an important issue in theoretical biology and ecology . this implies to face the challenging problem of studying complex nonequilibrium structures , which form in the course of nonlinear evolution . more generally , self - organized nonequilibrium patterns and traveling waves are ubiquitous in nature and appear , for instance , in chemical reactions , biological systems , as well as in epidemic outbreaks . among the most studied types of patterns are spiral waves , which are relevant to autocatalytic chemical reactions , aggregating slime - mold cells and cardiac muscle tissue . 
in all these _ nonequilibrium _ and _ nonlinear _ processes , as well as in population dynamics models , pattern formation is driven by diffusion which , together with internal noise , act as mechanisms allowing for stabilization and coevolution of the reactants . in this work , we consider a paradigmatic spatially - extended species population system with cyclic competition , which can be regarded as a simple food - chain model . in fact , such a system is inspired by recent experiments on the coevolution of species of bacteria in cyclic competition . using methods of statistical physics , we study the influence of spatial degrees of freedom and internal noise on the coevolution of the species and on the emerging spiral patterns . in particular , we compute the correlation functions and provide analytical expressions for the spreading speed and wavelength of the propagating fronts . to underpin the role of internal noise , the results of the stochastic description are compared with those of the deterministic equations . in this letter , we investigate a stochastic spatial variant of the _ rock - paper - scissors game _ ( also referred to as cyclic lotka - volterra model ) . these kinds of systems have been studied both from a game - theoretic perspective , see e.g. and references therein , and within the framework of chemical reactions , revealing rich spatio - temporal behaviors ( e.g. emergence of rotating spirals ) . while our methods have a broad range of applicability , they are illustrated for a prototypical model introduced by may and leonard where species , and undergo a cyclic competition ( codominance with rate ) and reproduction ( with rate ) , according to the reactions hence , an individual of species will consume one of species ( ) with rate and will reproduce with rate if an empty spot , denoted , is available ( , i.e. there is a _ finite _ carrying capacity ) . 
in addition , to mimic the possibility of migration , it is realistic to endow the individuals with a form of mobility . for the sake of simplicity , we consider a simple exchange process , with rate , among any _ nearest - neighbor _ pairs of agents : . if one ignores the spatial structure and assumes the system to be well - mixed ( with an infinite number of individuals ) , the population s mobility plays no role and the dynamics is aptly described by the deterministic rate equations ( re ) for the densities of species and , respectively . introducing , the re read : , \quad i\in \{1,2,3\}\end{aligned}\ ] ] where the index is taken modulo 3 and is the total density . as shown by may and leonard ( see also ) , these equations possess 4 absorbing fixed points , corresponding to a system filled with only one species and to an empty system . in addition , there is a reactive fixed point , corresponding to a total density . a linear stability analysis shows that is _ unstable_. the absorbing steady states , and are heteroclinic points . the existence of a lyapunov function allows to prove that , within the realm of the above re , the phase portrait is characterized by flows spiraling outward from , with frequency ] and $ ] , upon ignoring nonlinearities like , one is left with the following cgle in the variable : with , and . the general theory of front propagation predicts that eq . ( [ cgle ] ) always admits traveling waves as stable solutions ( i.e. no benjamin - feir or eckhaus instabilities occur ) . we have determined such periodic solutions by computing , from the dispersion relation of ( [ cgle ] ) , the spreading velocity and the spirals wavelength ( details will be given in ) : in the stochastic version of the model , the wavelength and velocity of the wavefronts have been found to agree with those of the deterministic treatment . 
hence , the expressions ( [ res ] ) also apply ( for large , with ) to the results of lattice simulations ( rescaled by a factor ) and to the solution of the spde ( [ spde ] ) . for instance , on a square grid with , lattice simulations and eqs . ( [ spde ] ) yield , in good agreement with the prediction of ( [ res ] ) : . for the spirals wavelength , numerical results ( lattice simulations and spde ) yield as predicted by ( [ res ] ) . in fig . [ wlength ] , the analytical prediction ( [ res ] ) for is compared with the values obtained from the spde ( [ spde ] ) , yielding a remarkable agreement for the functional dependence on the parameter . yet , as eq . ( [ cgle ] ) does not account for all nonlinearities , the analytical and numerical values differ by a prefactor ( considered in fig . [ wlength ] ) . it can still be noted that ( [ cgle ] ) and the predictions ( [ res ] ) are valid in all dimensions . , where is the spirals wavelength of the propagating spiral waves . analytical results ( red curve , rescaled by a factor ; see text ) are compared with the solution of the spde ( [ spde ] ) ( black circles ) . [ wlength ] ] motivated by recent experiments , we have considered a spatially - extended model with three species in cyclic competition , and focused on the spatial and stochastic effects . the local character of the reactions and internal noise allow mobile populations to coexist and lead to pattern formation . we have shown that already for finite mobility the lattice model can be described by spde . with the latter and lattice simulations , we have studied how entanglement of spirals form and obtained expressions for their spreading velocity and wavelength . the size of the patterns crucially depends on the diffusivity : above a certain threshold the system is covered by one species . in the absence of noise , the equations still predict the formation of spiral waves , but their spatial arrangement depends on the initial conditions . 
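the symbols of the rate equations ( 2 ) and of the stability analysis are lost in this extraction . as an illustrative check of the claim that the reactive coexistence fixed point is unstable , with flows spiraling outward toward the heteroclinic cycle , here is a minimal sketch assuming the standard may - leonard form da/dt = a [ mu ( 1 - rho ) - sigma c ] ( and cyclically for b and c ) , with reproduction rate mu and selection rate sigma ; the parameter values are purely illustrative .

```python
import numpy as np

def rhs(state, mu=1.0, sigma=1.0):
    # Assumed standard May-Leonard form of the rate equations:
    #   da/dt = a * (mu*(1 - rho) - sigma*c),  rho = a + b + c,
    # and cyclically for b and c (each species is consumed by its predator).
    a, b, c = state
    rho = a + b + c
    return np.array([a * (mu * (1.0 - rho) - sigma * c),
                     b * (mu * (1.0 - rho) - sigma * a),
                     c * (mu * (1.0 - rho) - sigma * b)])

def integrate(state, t_end=50.0, dt=0.01, **kw):
    # Plain RK4; adequate for this smooth three-dimensional system.
    for _ in range(int(t_end / dt)):
        k1 = rhs(state, **kw)
        k2 = rhs(state + 0.5 * dt * k1, **kw)
        k3 = rhs(state + 0.5 * dt * k2, **kw)
        k4 = rhs(state + dt * k3, **kw)
        state = state + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return state

mu = sigma = 1.0
fp = np.full(3, mu / (3.0 * mu + sigma))     # reactive fixed point a* = b* = c*
start = fp + np.array([1e-3, 0.0, -1e-3])    # small perturbation
end = integrate(start.copy())
# The perturbation grows: the coexistence point is linearly unstable and the
# deterministic flow spirals outward toward the absorbing boundary states.
print(np.linalg.norm(start - fp), "->", np.linalg.norm(end - fp))
```

the growing distance from the fixed point is the mean - field counterpart of the coarsening discussed above : without mixing - induced pattern formation , the deterministic flow leaves the coexistence point .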
support of the german excellence initiative via the program " nanosystems initiative munich " is gratefully acknowledged . m. m. is grateful to the humboldt foundation for support through the grant iv - scz/1119205 . + currently at : mathematics institute & centre for complexity science , university of warwick , coventry cv4 7al , u.k . |
signals transmitted over wireless channels undergo severe degradations due to effects such as path loss , shadowing , fading , and interference from other transmitters , in addition to thermal noise at the receiver . one major way to combat static fading is to provide diversity in either time , frequency , or space .for this purpose , multiple - antenna systems that provide high orders of spatial diversity and high capacity have been extensively studied . however , due to limited terminal sizes , the implementation of two or more antennas may be impossible . based on the seminal works in and , the authors in set up a framework for _ cooperative communications _, where multiple terminals use the resources of each other to form a virtual antenna array .following these works , many researchers have proposed distributed communication schemes and analyzed their outage probability behavior such as in .the main protocols that have been proposed are the _ amplify - and - forward _ , where the relay only amplifies the signal received from the source , before transmitting it to the destination , and the _ decode - and - forward _ , where the relay decodes the received signal before transmitting it to the destination . in this paper, we study the performance of distributed space - time bit - interleaved coded modulations ( d - st - bicm ) schemes for non - orthogonal amplify - and - forward protocols .furthermore , we focus on situations where the transmitter to relay and inter - relay links quality is highly better than the transmitter to receiver link quality .this situation occurs for example when deploying professional relays on top of buildings in a way to improve the link reliability in low coverage zones of a multi - cellular system .the paper is organized as follows : section [ matryoshka ] defines the matryoshka block - fading channel , a channel that characterizes the cooperative protocol considered in this paper . 
in section [ system_model ] , we describe the system model and all the parameters involved in our study .we then derive bounds on the diversity of d - st - bicm for the minimum cooperation frame length in section [ div1 ] , and section [ div2 ] extends these results for any length .section [ simulations ] shows simulation results for different network topologies , while section [ conclusions ] gives the concluding remarks .in this paper , we consider the block - fading channel model in which a d - st - bicm codeword undergoes a limited number of fading channel realizations , namely one fading coefficient per spatial path . for the sake of analysis , we introduce a block - fading channel model where the set of random variables of a higher diversity block always includes the set of random variables of a lower diversity block , in a way similar to nested matryoshka dolls .let us consider independent fading random variables providing a total diversity order of .let be a channel built from the concatenation of blocks , where and are respectively the sets of diversity orders and lengths of each block . as usual ,the integer denotes the cardinality of the set .the -th block has a diversity order equal to and its fading set is with , fading random variables , such that .thus , we have and or equivalently is the maximum diversity order . 
this channel defined by nested fading setsis referred to as a matryoshka channel and it is illustrated in fig .[ fig : matr ] .let us now transmit a bpsk - modulated and interleaved codeword of a rate- code over the channel .first , let us focus on the pairwise error probability ( pep ) of two given binary codewords and .due to the channel model , the diversity order of this pep is equal to the diversity order of the lowest index block observing a non - zero part of .the performance of the coded modulation has a diversity order upper - bounded by defined as follows : the diversity observed after decoding a rate- linear code transmitted over a channel is upper - bounded by where is given by the following inequalities : and is achievable for any systematic linear code .+ [ prop1 ] _ proof : _ this proof is inspired from the singleton bound s one .the code has length and dimension , where and . if , whatever the code is , a puncturing of the last bits leads to a zero minimum hamming distance because means that there always exists two codewords and such that the last bits of are null , and involves that .let us now suppose that the code is linear and systematic . if the information bits are transmitted on the blocks of highest diversity order and if , the hamming distance after puncturing the last bits remains strictly positive and induces that . + it is straightforward to show that the bound on the diversity order applies to any discrete modulation . as a remark , in order to achieve the upper - bound on the diversity of a block - fading channel , non - zero bits of word be placed in as many independent blocks as given by the singleton bound . for matryoshka channels ,the bound is achieved as soon as one non - zero bit of any word is placed in a block of diversity higher than .where is the length- vector of received signals and is the length- vector of -qam symbols . 
is a precoding matrix , and is upper - triangular as shown in ( 4 ) .+ \ ] ] \ ] ] with : finally , the vector is a length- colored gaussian noise vector as given by ( 5 ) .we set : = 2 n_0 { \bf \theta}\ ] ] where the operator denotes transpose conjugate . by performing a cholesky decomposition on ,we get : thus the equivalent channel model becomes : where is a white gaussian noise vector . + digital transmission is made as follows : uniformly distributed information bits are fed to a binary convolutional encoder .coded bits are then interleaved and gray mapped into qam symbols .the qam symbols are then rotated via and transmitted on the ssaf channel defined by given in ( [ vector_form ] ) .the coherent detector at the destination computes an extrinsic information based on the knowledge of , the received vector , and independent _ a priori _information for all coded bits .the channel decoder then computes _ a posteriori _ probabilities ( app ) based on the de - interleaved extrinsic information coming from the detector using the forward - backward algorithm .the transmitted information rate is equal to bits per channel use , where the cardinality of the qam constellation is . as a remark ,one precoded symbol at the output of is transmitted over a row of the channel matrix and thus experiences a set of random variables .if we assume that the quality of the source to relays and inter - relays links is much better than the source to destination or relay to destination links , we can then focus on the or random variables to understand the diversity behavior of such a system . 
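the cholesky - based noise whitening step in ( 5 ) and ( 6 ) can be checked numerically . the sketch below uses a placeholder hermitian positive - definite matrix theta ( in the paper theta is fixed by the slotted amplify - and - forward protocol ) : it colors white circular noise so that its covariance is 2 n_0 theta , then whitens by the inverse cholesky factor .

```python
import numpy as np

rng = np.random.default_rng(0)
n, N0, K = 4, 0.5, 50000

# Placeholder Hermitian positive-definite Theta; in the paper Theta comes from
# the amplify-and-forward protocol, here it is just an illustrative matrix.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Theta = A @ A.conj().T / n + np.eye(n)

# Color white circular noise so that E[w w^H] = 2*N0*Theta.
Lc = np.linalg.cholesky(Theta)                  # Theta = Lc Lc^H
white = np.sqrt(N0) * (rng.standard_normal((n, K))
                       + 1j * rng.standard_normal((n, K)))
w = Lc @ white

# Whitening: left-multiplying the received vector by Lc^{-1} yields an
# equivalent channel whose noise Lc^{-1} w is white again.
w_white = np.linalg.solve(Lc, w)
R_hat = w_white @ w_white.conj().T / K
print(np.allclose(R_hat, 2.0 * N0 * np.eye(n), atol=0.05))
```

the same left multiplication is applied to the channel matrix , which is why the equivalent model keeps the upper - triangular structure exploited in the diversity analysis .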
indeed , in the context of professional relay deployment on top of buildings, we may assume that the relays are placed and have their antennas tuned to ensure a good link quality with the base station .furthermore , in the case of detect - and - forward or decode - and - forward protocols , this assumption is still relevant .finally , one precoded symbol transmitted on the -th row of the channel matrix sees a set of fading variables included in the set seen by a symbol sent on the -th row .hence , we will see in the sequel that the equivalent channels obtained by the use of a sequential slotted amplify and forward protocol fall into the class of matryoshka channels .the maximum diversity inherent to the ssaf channel is , and it can be collected by an app detector ( at the destination ) if a full - diversity linear precoder is used at the transmitter .the precoder mixes the constellation symbols being transmitted on the channel providing full diversity with uncoded systems and without increasing the complexity at the detector . using precoders that process spreading among more than time slotscan further improve the performance . from an algebraic point of view , a linear precoder of size is the optimal configuration to achieve good coding gains ( without channel coding ) at the price of an increase in detection complexity ( the complexity of an exhaustive app detector grows exponentially with the number of dimensions ) .on the other hand , for coded systems transmitted on block - fading channels , the channel decoder is capable of collecting a certain amount of diversity that is however limited by the singleton bound . 
in ,the modified singleton bound taking into account the rotation size over a mimo block - fading channel is used to achieve the best tradeoff between complexity and diversity .for this purpose , we derive hereafter an upper - bound on the diversity order of a coded transmission over a precoded -slot ssaf channel , and then deduce the precoding strategy to follow in order to achieve full diversity .we will first assume that the interleaver of the bicm is ideal , which means that for any pair of codewords , the non - zero bits of are transmitted in different blocks of time periods , which means that no inter - slot inter - bit interference is experienced .the interleaving , modulation and transmission through the channel convert the codewords and onto points and in a euclidean space . for a fixed channel , the performanceis directly linked to the euclidean squared distance , that can be rewritten as a sum of squared euclidean distances associated to the non - zero bits of .for each of the squared euclidean distances , we can build an equivalent channel model which corresponds to the transmission of a bpsk modulation over one row of the channel matrix .thus , several squared euclidean distances appear to be transmitted on the same equivalent channel and the squared distance can be factorized as follows : where is linearly dependent on the norm of the -th row of . in other words , at the output of the app detector ,an equivalent block - fading channel is observed and the constituent blocks do not have the same intrinsic diversity order : a soft output belonging to the -th block carries the attenuation coefficients . as a remark ,blocks are sorted such that the -th block carries a diversity order of and the subset of realizations of random variables observed in the -th row of is included in the subset of random variables observed in the -th row of . 
as the same modulation is used on each time slot of the relaying protocol , each block length is equal to . finally , the equivalent -slot ssaf channel at the output of the app detector is a matryoshka ] channel , where is the number of coded bits per codeword . with this observation , we can conclude that the upper - bound on the diversity order of a non - precoded ssaf channel is which is equal to the classical singleton bound on the diversity order of block - fading channels , with the difference that it can be achieved by any systematic code . for the sake of generalization , we now suppose that modulations with different spectral efficiencies are sent over the slots of the cooperation frame . we define as the number of bits carried by one symbol of the modulation transmitted on the -th time slot . in this case , the block fading channel is a , \left[\frac{m_{1}n}{\sum_{k=1}^{\beta + 1 } m_k } , \frac{m_{2}n}{\sum_{k=1}^{\beta + 1 } m_k } , ... , \frac{m_{\beta+1}n}{\sum_{k=1}^{\beta + 1 } m_k } \right ] \right ) ] and ] and ] and ] matryoshka channel . by applying ( [ matryoshkabound ] ) , we obtain that if : with : then the achievable diversity order is . thus , the parameters allow for a fine tuning of the target diversity for a given coding rate . this tuning allows to further improve the coding gain . unfortunately , the theoretical analysis of coding gains for coded modulations on block fading channels is difficult and often carried out by extensive computer simulations . the analysis of such a design is out of the scope of this paper , which mainly focuses on diversity orders optimization . so far , we have considered the -relay ssaf protocol with length- cooperation frames .
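most symbols of the diversity bound are elided in this extraction . as an illustrative reconstruction : a nonzero codeword of a rate- linear code can be confined to a tail of blocks as soon as their total length exceeds n ( 1 - r ) , so on a matryoshka channel the achievable diversity is the order of the first ( highest - diversity ) block that tail reaches . the sketch below ( block sizes hypothetical ) recovers the classical singleton bound for the non - precoded beta - relay ssaf channel with equal - length blocks .

```python
import math

def matryoshka_diversity(D, L, R):
    # D: per-block diversity orders d_1 > d_2 > ...; L: block lengths.
    # A nonzero codeword of a rate-R linear code can be confined to the last
    # blocks once their total length exceeds N*(1 - R); the diversity it then
    # collects is that of the first (highest-diversity) block it touches.
    N = sum(L)
    tail = 0
    for j in range(len(L) - 1, -1, -1):
        tail += L[j]
        if tail > N * (1.0 - R):
            return D[j]
    return D[0]

# Non-precoded beta-relay SSAF: beta+1 equal-length blocks with diversity
# orders beta+1, beta, ..., 1.  The result matches the classical Singleton
# bound 1 + floor((beta+1)*(1 - R)) on block-fading channels.
beta, n_bits = 2, 30
D = [beta + 1 - i for i in range(beta + 1)]          # [3, 2, 1]
L = [n_bits // (beta + 1)] * (beta + 1)              # [10, 10, 10]
for R in (1.0 / 3.0, 0.5, 2.0 / 3.0):
    singleton = 1 + math.floor((beta + 1) * (1 - R))
    print(R, matryoshka_diversity(D, L, R), singleton)
```

note that , consistently with the proposition above , the bound is met by a systematic code placing the information bits in the high - diversity blocks .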
in ,the authors consider a cooperation scheme ( for -relay ssaf and higher ) in which the cooperation frame is stretched in a way to protect more symbols .in other words , we consider the -slot -relay ssaf protocol with : where is the number of additional slots .the goal of this extension is to increase the number of coded bits that experience full diversity .the first symbols in * x * from ( [ channel_model ] ) will have maximum diversity , which reduces to the first symbol having maximum diversity in the -slot ssaf scenario .however , this additional protection entails an increase in the size of * x * , thus complexity at the app detector increases as well .+ an illustration of this scheme is provided in fig .[ m - slot ] for the -slot -relay ssaf protocol ; the source always transmits a constellation symbol , and starting from the second time slot , the relays cooperate in a round robin way ; in this case , the first out of a total of constellation symbols have a maximum diversity .it is then clear that this protocol allows to achieve full diversity with higher coding rates . in the sequelwe will provide bounds on the diversity order of coded modulations under this cooperative protocol .we first consider the -slot -relay ssaf protocol without precoding .we thus obtain a matryoshka block - fading channel as with ] .this means that in a cooperation frame of length , there are symbols that have maximum diversity .this makes clear the fact that higher coding rates can be attained with this scheme .the diversity of a non - precoded bicm over this protocol is given by : hence , we attain the maximum diversity order if : which implies that we can - theoretically - achieve full diversity with a coding rate getting close to as increases , but at the price of an app detection complexity increase . 
to illustrate this bound on diversity , fig .[ rate ] gives examples of the -relay , -relay , and -relay ssaf channels .we noticed that with increasing the number of slots , the maximum coding rate has a logarithmic - like growth , while the complexity at the detector increases exponentially ( as the cardinality of the received vector is ) .this means that only few additional slots can be practically added to the cooperation frame in order to provide a reasonable rate / complexity tradeoff . if we precode the first symbol with maximum diversity with the symbols having the lowest diversity orders we obtain a block - fading channel where ]it is clear then that we provide symbols having maximum diversity with precoding .the bound on diversity with a single precoder is given by : full diversity is obtained for which , again , shows that linear precoding can be used to increase the obtained diversity without increasing the complexity of an optimal app detector .in this section , word error rate performances of different d - st - bicm schemes are compared to information outage probability for different system configurations .we consider the single - relay ( fig .[ 1 - 64qam ] ) , two - relay ( fig .[ 2 - 16qam ] ) , and three - relay ( fig . [ 3-qpsk ] ) half - duplex ssaf cooperative channels with different coding rates and constellation sizes .we use interleavers designed as in with an additional constraint to transmit the systematic bits on the higher diversity blocks of the equivalent matryoshka channels .we set the values of , and , so that the received average energy over all the time slots is invariant .the space - time precoders are built using algebraic rotations from ( see appendix [ appen ] for further details ) , and the number of iterations between the detector and the decoder is fixed to . 
fig .[ 1 - 64qam ] shows the performance of st - bicm over the single - relay ssaf channel using 64-qam modulation and half - rate coding .following , no rotation is needed , as the channel decoder with optimized interleaving is capable of recovering the maximum available diversity . for small to moderate signal - to - noise ratios , and due to noise amplification at the relay , precoding the signal constellation does not affect the performance .from moderate to high signal - to - noise ratios , a rotation yields a severe performance degradation .this is due to the fact that interference between symbols ( due to the rotation ) becomes too heavy for the decoder and thus affects the coding gain . in fig .[ 2 - 16qam ] , various coding strategies using rsc codes for the -relay ssaf protocol , all at an information rate of b / s / hz , are compared to gaussian input outage probability .the first observation is that orthogonal coded schemes suffer from weak coding gains , although providing full diversity . for the curves employing the ssaf protocol , coding strategies following in ( [ d1 ] ) , in ( [ oneprecoder ] ) , and the bound on the coding rate derived in section [ diff_mod ] .the best strategy is shown to be the code with an rotation with qpsk modulation in the three slots , following .[ 3-qpsk ] shows the performance of the ssaf with three relays using qpsk modulation and the half - rate rsc code .the three strategies following and achieve full diversity with . 
in caseno precoder is available at the source and we want to transmit at the same coding rate , another option is to follow from ( [ d4 ] ) , thus extending the cooperation frame with slots .this strategy allows to achieve full diversity without precoding , as shown with the dashed blue curve .we studied coding strategies for the non - orthogonal amplify - and - forward half - duplex cooperative fading channel .finally , performances close to outage probabilities for different number of relays , coding rates , and constellation sizes are shown .the real cyclotomic rotation from can be written as : \ ] ] with .suppose now we have to transmit a half - rate code over the relay ssaf channel . according to , one rotation with is sufficient .this gives the following space - time precoder : \ ] ] according to , we need two rotations with each .this gives the following space - time precoder : \ ] ] e. viterbo , `` table of best known full diversity algebraic rotations , '' available at : link : www1.tlc.polito.it/~ viterbo / rotations / rotations.html[www1.tlc.polito.it/~ viterbo / rotations / rotations.html ] .-slot ssaf protocol with two relays .the cooperation frame has length , a light gray rectangle means that the terminal is emitting , a dark blue rectangle means the terminal is receiving .a white rectangle means the terminal is inactive . ] | in this work , we consider the problem of coding for the half - duplex non - orthogonal amplify - and - forward ( naf ) cooperative channel where the transmitter to relay and the inter - relay links are highly reliable . we derive bounds on the diversity order of the naf protocol that are achieved by a distributed space - time bit - interleaved coded modulation ( d - st - bicm ) scheme under iterative app detection and decoding . these bounds lead to the design of space - time precoders that ensure maximum diversity order and high coding gains . 
the word error rate performance of d - st - bicm are also compared to outage probability limits . |
power quality control is one of the major concerns for power delivery systems to function reliably , and it requires measurements of voltage characteristics , among which the frequency measurement is a non - trivial task due to the presence of voltage sags and voltage harmonics mostly caused by nonlinear loads . in the particular case of three - phase power systems , the clarke s , transformation is widely used as the preprocessing method to create a complex - valued single - phase signal from the real - valued three - phase signals , so that traditional complex - valued spectrum estimation methods can be applied , such as the mvdr method or the recently proposed iterative mvdr ( i - mvdr ) method . to improve the resolution, we can further apply the subspace methods and one representative example is the music method .however , all the zero sequence voltages will be cancelled out in the complex - valued signal and hence can not be detected .although these harmonic voltages would simply be blocked by a delta transformer , they will add up in the neutral , leading to overheating in the transformer and potential fire hazards . 
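as background for the estimators used below , the complex - domain mvdr spectrum p ( f ) = 1 / ( a^h r^{-1} a ) can be sketched as follows ; the tone amplitudes , sampling rate , and snapshot length are hypothetical values chosen only for illustration .

```python
import numpy as np

rng = np.random.default_rng(1)
fs, M, K = 1000.0, 20, 4000          # sampling rate, snapshot length, snapshots
Ts = 1.0 / fs
t = np.arange(K + M) * Ts

# Complex test signal: tones at 50 Hz and 150 Hz plus circular white noise.
s = (np.exp(2j * np.pi * 50.0 * t)
     + 0.3 * np.exp(2j * np.pi * 150.0 * t)
     + 0.05 * (rng.standard_normal(K + M) + 1j * rng.standard_normal(K + M)))

# Sample covariance of length-M snapshots.
X = np.stack([s[k:k + M] for k in range(K)], axis=1)
Rinv = np.linalg.inv(X @ X.conj().T / K)

def mvdr(f):
    # MVDR spectrum P(f) = 1 / (a^H R^{-1} a) with the frequency sweeping
    # vector a(f) = [1, e^{j 2 pi f Ts}, ..., e^{j 2 pi f (M-1) Ts}]^T.
    a = np.exp(2j * np.pi * f * np.arange(M) * Ts)
    return 1.0 / np.real(a.conj() @ Rinv @ a)

print(mvdr(50.0), mvdr(150.0), mvdr(100.0))   # sharp peaks at the two tones
```

the spectrum peaks near the tone powers at 50 hz and 150 hz and stays at the noise floor elsewhere , which is the behavior exploited by the frequency estimators discussed next .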
to detect these harmonics , as well as harmonics of other orders ,we propose a quaternion - valued model and all the traditional spectrum estimation methods can be extended to this domain , such as mvdr and music .we will show that harmonics of all orders will be reserved in the resulting quaternion - valued signal and will be detected by relevant estimation methods .the rest of this paper is organised as follows .a brief review of the complex - valued model is presented in section ii .our quaternion - valued model is proposed in section iii , together with the fourier analysis and the mvdr and music - like estimation algorithms .simulation results are provided in section iv and conclusions are drawn in section v.we consider the following discrete - time balanced three - phase power system in the presence of harmonic distortions : where are the amplitudes of the harmonic signals , is the fundamental ( angular ) frequency , is the sampling interval , is the signal phase , and are the measurement noise . traditionally , the three - phase signals will be converted to a complex - valued single - phase signal via the clarke s transformation .firstly , the three - phase signals are mixed into two parts , namely and , where then these two parts will be merged as a complex - valued signal . with this complex - valued signal, we can exploit the mvdr spectrum to locate the frequencies , and it is given by : where is the hermitian - transpose operation , is the covariance matrix of dimension , and ^\mathrm t\ ] ] is the frequency sweeping vector .we can also use the music spectrum which is expressed as : where denotes the euclidean norm , represents the noise subspace and comprises the eigenvectors of the covariance matrix which are corresponding to the smallest eigenvalues , where is assumed to be known or can be estimated using the information theory methods . 
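the cancellation of zero - sequence ( triplen ) harmonics by the clarke preprocessing is easy to verify numerically . the sketch below ( amplitudes and rates illustrative ) uses the power - invariant clarke matrix ; because both of its rows sum to zero , the cophasial third harmonic vanishes from the complex signal while the fundamental survives .

```python
import numpy as np

fs, f0, N = 5000.0, 50.0, 500
t = np.arange(N) / fs
phases = np.array([0.0, -2.0 * np.pi / 3.0, 2.0 * np.pi / 3.0])

def harmonic(h, amp):
    # h-th order harmonic of the balanced three-phase system:
    # v_x(n) = amp * cos(h * (w0 n Ts + phi_x)); for h = 3 the three phase
    # components are cophasial (a zero-sequence component).
    return amp * np.cos(h * (2.0 * np.pi * f0 * t[None, :] + phases[:, None]))

v = harmonic(1, 1.0) + harmonic(3, 0.06)     # fundamental + 6% third harmonic

clarke = np.sqrt(2.0 / 3.0) * np.array([[1.0, -0.5, -0.5],
                                        [0.0, np.sqrt(3.0) / 2.0,
                                         -np.sqrt(3.0) / 2.0]])
alpha, beta = clarke @ v
s = alpha + 1j * beta                        # complex single-phase signal

spec = np.abs(np.fft.fft(s)) / N
freqs = np.fft.fftfreq(N, 1.0 / fs)
peak = lambda f: spec[np.argmin(np.abs(freqs - f))]
# The 150 Hz zero-sequence component is annihilated by the zero-mean rows.
print(peak(50.0), peak(150.0), peak(-150.0))
```

the fundamental appears at + 50 hz with a nonzero peak , while both the + 150 hz and - 150 hz bins are numerically zero : the clarke - based complex model is blind to the triplen harmonic , which motivates the quaternion model below .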
in practice, the covariance needs to be updated and estimated from the average of samples , where where is the number of observations . in detail , is composed of complex - domain harmonic signals that can be divided into two categories plus noise , where is the summation of all positive sequence voltages , and is the summation of all negative sequence voltages , and denotes the largest integer not greater than .all the zero sequence voltages have been cancelled out .zero sequence voltages of the same order are cophasial in the three voltage channels and will be eliminated since both rows of the transformation matrix are zero - mean vectors . to solve this problem, we propose our quaternion - valued approach in the next section .we construct a quaternion - valued signal from the three - phase signals as where are the three imaginary units of the quaternion algebra which are constrained by this quaternion - valued signal contains quaternion - domain harmonic signals that belong to three categories , , plus noise , where is the summation of all the positive sequence voltages , \\ & \quad-\frac{\imath+\jmath+k}{\sqrt{3}}\sin[(3p-2)(\omega nt_s+\phi)]\big\}\\ & = \sum_{p=1}^{\lfloor\frac{h+2}{3}\rfloor}\frac{2\imath-\jmath - k}{2}v_{3p-2}e^{-\frac{\imath+\jmath+k}{\sqrt{3}}(3p-2)(\omega nt_s+\phi)}\ ; , \end{split } \label{3p-2}\ ] ] is the summation of all the negative sequence voltages , \\ & \quad+\frac{\imath+\jmath+k}{\sqrt{3}}\sin[(3p-1)(\omega nt_s+\phi)]\big\}\\ & = \sum_{p=1}^{\lfloor\frac{h+1}{3}\rfloor}\frac{2\imath-\jmath - k}{2}v_{3p-1}e^{\frac{\imath+\jmath+k}{\sqrt{3}}(3p-1)(\omega nt_s+\phi)}\ ; , \end{split } \label{3p-1}\ ] ] is the summation of all the zero sequence voltages , \\ & = \sum_{p=1}^{\lfloor\frac{h}{3}\rfloor}v_{3p}\frac{\imath+\jmath+k}{2}\big(e^{\frac{\imath+\jmath+k}{\sqrt{3}}3p(\omega nt_s+\phi)}\\ & + e^{-\frac{\imath+\jmath+k}{\sqrt{3}}3p(\omega nt_s+\phi)}\big)\;. 
\end{split } \label{3p}\ ] ] hence all the harmonic signals will be reserved in the quaternion - valued signal. we may observe from ( [ 3p-2])([3p ] ) that the frequencies of the harmonic signals have been mapped into the frequencies of the quaternion - valued signal associated with the axis ( see fig .axis , title="fig:",width=211 ] + + then we can adopt the mvdr spectrum in ( [ mvdr ] ) and the music spectrum in ( [ music ] ) by substituting the frequency sweeping vector as ^\mathrm t\;.\ ] ] the frequencies detected in the spectrum are either the original real - domain angular frequencies or their additive inverses , namely 1 .if a peak is detected in the spectrum in the absence of its additive inverse , it corresponds to a positive or negative sequence voltage signal and this spectrum peak indicates its angular frequency or its additive inverse .if two `` mirrored '' peaks are detected in the spectrum , they correspond to a zero sequence voltage signal and they indicate the signal s angular frequency and its additive inverse , respectively .in this section , we provide some numerical examples to illustrate the performance of the proposed quaternion model . in all experiments ,the fundamental frequency is 50 hz , the sampling frequency is khz , the initial phase is , and , .there exist a second - order and a third - order harmonic signals , both set to be 6% in amplitude . in the first experiment, we test the capability of the two modelings .the mvdr and music spectra of the quaternion- and complex - valued models are plotted in fig .2 , where snr db .it can be observed that the proposed model is able to detect all the harmonic signals , namely 50 hz ( the fundamental frequency ) , hz ( the second - order harmonic ) , and hz ( the third - order harmonic ) , while the complex - valued model fails at the third - order harmonic frequency . in the second experiment ,we test the accuracy of relevant algorithms. 
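the derivations above rely on the pure unit quaternion ( i + j + k ) / sqrt(3) acting as an imaginary - unit axis . a minimal check with an explicit hamilton product ( quaternions stored as ( w , x , y , z ) 4-vectors ; the pure - quaternion encoding of the three phases in the last lines is an assumption , since the exact construction is elided in this extraction ) :

```python
import numpy as np

def qmul(p, q):
    # Hamilton product; quaternions stored as (w, x, y, z).
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qexp(mu, theta):
    # For a pure unit quaternion mu (mu*mu = -1): e^{mu theta} = cos + mu*sin.
    out = np.sin(theta) * mu
    out[0] += np.cos(theta)
    return out

mu = np.array([0.0, 1.0, 1.0, 1.0]) / np.sqrt(3.0)   # (i + j + k)/sqrt(3)
print(qmul(mu, mu))                                   # ~ (-1, 0, 0, 0)

# Frequencies add along this single axis, e^{mu a} e^{mu b} = e^{mu (a+b)},
# which is why all harmonic orders appear as peaks on the mu-axis spectrum.
print(np.allclose(qmul(qexp(mu, 0.3), qexp(mu, 0.5)), qexp(mu, 0.8)))

# Unlike the Clarke preprocessing, a pure-quaternion encoding of the three
# phases keeps cophasial (zero-sequence) samples: three equal samples give a
# nonzero quaternion instead of cancelling.
v0 = 0.06
print(np.linalg.norm(np.array([0.0, v0, v0, v0])))    # > 0
```

since mu squares to -1 , the span of 1 and mu behaves like the complex plane , so the fourier , mvdr , and music machinery carries over with the sweeping vector built from e^{mu omega n t_s} .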
the estimation errors ( averaged via 300 monte carlo simulation runs ) of the quaternion- and complex - valued mvdr and music algorithms are plotted in fig .3 , where the snr value varies from 25 to 45 db . we can see that all the algorithms have similar estimation accuracy .we have presented a quaternion - valued model as an alternative preprocessing approach to convert the three - phase signals into a single - phase system . compared with the clarke s transformation, the proposed model can additionally detect the zero sequence voltages .simulation results show that the proposed model can detect all - order voltage harmonics effectively , while sharing similar estimation accuracy with the complex - valued model .m. bollen , _ understanding power quality problems : voltage sags and interruptions_. wiley - ieee press , 2000 .m. akke , `` frequency estimation by demodulation of two complex signals , '' _ ieee transactions on power delivery _ , vol .1 , pp . 157-163 , jan .h. j. jeon and t. g. chang , `` iterative frequency estimation based on mvdr spectrum , '' _ ieee transactions on power delivery _2 , pp . 621-630 , apr .y. xia and d. p. mandic , `` augmented mvdr spectrum - based frequency estimation for unbalanced power systems , '' _ ieee transactions on instrumentation and measurement _ , vol .62 , no . 7 , pp. 1917-1926 , jul . 2013 .r. o. schmidt , `` multiple emitter location and signal parameter estimation , '' _ ieee transactions on antennas and propagation _ , vol .3 , pp . 276-280 , mar . w. m. grady and s. santoso , `` understanding power system harmonics , '' _ ieee power engineering review _ ,811 , nov .j. p. ward , _ quaternions and cayley numbers : algebra and applications _ , kluwer , norwell , ma , 1997 .m. wax and t. kailath , `` detection of signals by information theoretic criteria , '' _ ieee transactions on acoustics , speech , and signal processing _assp-33 , no .2 , pp . 387-392 , apr . 1985. s. j.
sangwine , `` fourier transforms of colour images using quaternion or hypercomplex numbers , '' _ electronics letters _ , pp . 1979-1980 , oct .j. w. tao , `` performance analysis for interference and noise canceller based on hypercomplex and spatio - temporal - polarisation processes , '' _ iet radar sonar and navigation _ , vol . 7 , no .3 , pp . 277-286 , mar .s. miron , n. le bihan , j. i. mars , `` quaternion - music for vector - sensor array processing , '' _ ieee transactions on signal processing _ ,4 , pp . 1218-1229 , apr . | in this work , a quaternion - valued model is proposed in lieu of the clarke s transformation to convert three - phase quantities to a hypercomplex single - phase signal . the concatenated signal can be used for harmonic distortion detection in three - phase power systems . in particular , the proposed model maps all the harmonic frequencies into frequencies in the quaternion domain , while the clarke s transformation - based methods will fail to detect the zero sequence voltages . based on the quaternion - valued model , the fourier transform , the minimum variance distortionless response ( mvdr ) algorithm and the multiple signal classification ( music ) algorithm are presented as examples to detect harmonic distortion . simulations are provided to demonstrate the potentials of this new modeling method . harmonics detection , fourier transform , minimum variance distortionless response , multiple signal classification , quaternion , three - phase power system
with the recent discovery of several extra - solar planets ( and brown dwarfs ) around solar type stars ( e.g. mayor & queloz 1995 ; butler & marcy 1996 ; basri , marcy & graham 1996 ) the question of the potential existence of extraterrestrial `` intelligent '' civilizations has become more intriguing than ever .while this topic has been the subject of extensive speculations and many ill - defined ( often by necessity ) probability estimates , at least one study ( carter 1983 ) has examined it from a more global , statistical perspective .that study concluded , on the basis of the near equality between the timescale of biological evolution on earth , , and the lifetime of the sun , , that extraterrestrial civilizations are exceedingly rare , even if conditions favorable for the development of life are relatively common .the conclusion on the rarity of extraterrestrial intelligent civilizations ( carter 1983 ; and see also barrow & tipler 1986 ) was based on one crucial _ assumption _ and one _ observation_. the _ assumption _ is that the timescale of biological evolution on a given planet , , and the lifetime of the central star , , are a priori entirely independent quantities . put differently, this assumes that intelligent life forms at some random time with respect to the main sequence lifetime of the star . the _ observation _ is that in the earth - sun system ( to within a factor 2 ; for definiteness i will take from now on to represent the timescale for the appearance of land life ) . for completeness, i will reproduce here the argument briefly .
_if _ and are indeed independent quantities , then most probably either or ( the set of is of very small measure for two independent quantities ) .if , however , _ generally _ , then it is very difficult to understand why in the first system found to exhibit an intelligent civilization ( the earth - sun system ) , it was found that .if , on the other hand , _ generally _ , then it is clear that the first system found to contain an intelligent civilization is likely to have ( since for a civilization would not have developed ) .thus , according to this argument , one has to conclude that typically , namely , that _ generally _ intelligent civilizations will not develop , and that the earth is the extremely rare exception .what i intend first to show in the present work is that this conclusion is at best premature , by demonstrating that not only that and may not be independent , but also that may in fact be the _ most probable _ value for this ratio .this is done in 2 .i use the recently determined cosmic star formation history to estimate the most likely time for intelligent civilizations to emerge in the universe .superficially it appears that ( which is determined mainly by biochemical reactions and evolution of species ) and ( which is determined by nuclear burning reactions ) should be independent .it suffices to note , however , that light energy ( from the central star ) exceeds by 23 orders of magnitude all other sources of energy that can drive chemical evolution in the prebiotic environment ( e.g. 
deamer 1997 ) , to realize that may in fact depend on ( note that the statement that intelligent life will not develop for also constitutes a qualitative dependence ) .below i identify a specific physical process that can , in principle at least , relate the two timescales .first i would like however to point out the following _ general _ property .imagine that we find that the ratio can be described by some function of the form , and that we further find that the function is monotonically increasing ( at least for the narrow range of values of corresponding to stars which allow the development of life , see below ) .this situation is shown schematically in fig . 1 .in such a case , since for a salpeter initial mass function ( salpeter 1955 ) we have that the distribution of stellar lifetimes behaves like ( since for main sequence stars , e.g.allen 1973 ) , it is immediately clear from fig . 1 , that it is _ most probable _ that in the first place we encounter an intelligent civilization , we will find that ( since the number of stars increases as we move to the right in the diagram ) . therefore , if we can show that some processes are likely to produce a monotonically increasing ( ; ) relation , then the fact that in the earth - sun system will find a natural explanation , and it will not have any implications for the frequency of extraterrestrial civilizations .i should note though that if the breadth of the ` band ' in fig .1 becomes extremely large , this is essentially equivalent to no relation and carter s argument is recovered .i will now give a simple example of how a relation may arise .i should emphasize that this is not meant to be understood as a realistic model , but merely to demonstrate that such a relation _could _ exist .nucleic acid absorption of uv radiation peaks between and and that of proteins between and (e.g .davidson 1960 ; sagan 1961 ; caspersson 1950 ) .absorption in these bands is highly lethal to all known forms of cell activity ( e.g. 
berkner 1952 ) .of all the potential constituents of a planet s atmosphere only o absorbs efficiently in the 20003000 range ( e.g. watanabe , zelikoff & inn 1958 ) .it has in fact been suggested that the appearance of land life has to await the build - up of a sufficient layer of protective ozone ( berkner & marshall 1965 ; hart 1978 ) .thus , it is important to understand the origin and evolution of oxygen in planetary atmospheres . while clearly only a limited knowledge of all the processes involved exists ( and even that only from the earth s atmosphere ) , this will suffice for the purposes of the present example .two main phases can be identified in the rise of oxygen in planetary atmospheres ( berkner & marshall 1965 ; hart 1978 ; levine , hays & walker 1979 ; canuto et al .in the first ( which on earth lasted yr ) , oxygen is released from the photochemical dissociation of water vapor ( this led on earth probably to oxygen levels of of the present atmospheric level ( p.a.l . ) ) . in the second phase ( which on earth lasted yr ) , the amounts of o and o reach levels p.a.l ., sufficient to shadow the land from lethal uv to allow the spread of life to dry land ( the uv extinction is normally expressed as , where is the impinging intensity , is the absorption coefficient and is the path length in the atmosphere ) . the important point to note ( berkner &marshall 1965 ; hart 1978 ) is that the duration of the first phase is inversely proportional to the intensity of the radiation in the range 10002000 ( significant peaks in h absorption exist in the 11001300 and 16001800 ranges ) .thus , for a given planetary size and orbit , the timescale for the development of shielding ( which we identify approximately with ) is dependent on the stellar spectral type , and therefore on . 
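the extinction law quoted above can be made concrete ; a minimal sketch with illustrative coefficients ( not calibrated to any real atmosphere ) shows why a modest build - up of absorber shields the surface so strongly :

```python
import numpy as np

def transmitted_intensity(i0, kappa, ell):
    """beer-lambert-type uv extinction: i = i0 * exp(-kappa * ell),
    with i0 the impinging intensity, kappa the absorption coefficient
    and ell the path length through the atmosphere."""
    return i0 * np.exp(-kappa * ell)

# doubling the absorber column squares the transmitted fraction
f1 = transmitted_intensity(1.0, 2.0, 1.0)
f2 = transmitted_intensity(1.0, 2.0, 2.0)
print(f1, f2)
```

this exponential dependence is what makes the timescale for reaching a protective ozone column so sensitive to the stellar uv output .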
for typical main sequence star relations , ( with in the range 0.61 , for spectral types f5k5 , see below ) , and empirical fractions of the radiation emitted in the 10002000range ( stecker 1970 ; carruthers 1971 ) , a simple calculation ( e.g. livio & kopelman 1990 ) leads to an approximate relation of the form = 4.6 in clearly with the existence of a relation like ( 1 ) ( which is monotonically increasing ) , the highest probability is to find , and hence the near equality of these two timescales in the earth - sun system can not be taken to imply that extraterrestrial civilizations are rare .i should note again that the detailed evolution of the atmosphere is surely more complicated than a simple dependence on the intensity of uv photons .in particular , it may be that the different phases of the evolution have different dependences on the properties of the central star .the important point , however , is that , as the above example shows , the _ existence _ of a relation is not implausible , and that ) _ could _ increase faster than linearly .given that extraterrestrial `` intelligent '' civilizations may not be exceedingly rare after all , one may ask what is a likely time in the history of the universe for such civilizations to emerge .i will restrict the discussion now to carbon - based civilizations . assuming a principle of ` mediocrity ', one would expect the emergence to coincide perhaps with the peak in the carbon production rate .the main contributors of carbon to the interstellar medium are intermediate mass stars ( wood 1981 ; yungelson , tutukov & livio 1993 ) through the asymptotic giant branch ( agb ) and planetary nebulae ( pne ) phases .recent progress has been made in the understanding of the cosmic history of the star formation rate ( sfr ) ( e.g. 
madau et al.1996 ; lilly et al .1996 ; madau , pozzetti & dickinson 1997 ) .assuming for simplicity that all the galaxies follow the same sfr history and stellar evolution processes , we can calculate the rate of formation of planetary nebulae ( and hence the rate of carbon production ) as a function of redshift . for this purpose a population synthesis code which follows the evolution of all the stars ( assumed to be mainly in binaries ) , including all episodes of mass exchange , common envelope phases etc ., has been used ( see yungelson , tutukov & livio 1993 ; yungelson et al .1996 for details of the code ; i am grateful to lev yungelson for carrying out the simulations ) . in fig . 2 , the assumed sfr as a function of redshift ( taken as an approximation to the results in madau et al . 1996 ) and the obtained pne formation rate as a function of redshiftare shown . as can be seen , the peak in the pn rate is somewhat delayed ( to ) with respect to the peak in the sfr , and is much more shallow at , due to the build - up of a reservoir during the previous epochs .realizing that continuously habitable zones ( chzs ) exist only around stars in the spectral range of about f5 to mid k ( e.g. kasting , whitmore & reynolds 1993 ) , and that in general the biochemistry of life requires rather precise conditions , carbon - based life may be expected to start ( with the assumed sfr history ) around , corresponding to an age of the universe of yr ( for , as seems to be indicated by recent observations , e.g. garnavich et al . 
1998 ; perlmutter et al.1998 ; riess , press & kirshner 1996 ; carlberg et al .1997 , and a present age of billion years ) .given the fact that ( as i have shown in the previous section ) the time required to develop intelligent civilizations is , it is expected that civilizations will emerge when the age of the universe is billion years , or maybe even somewhat older , since the chzs around k stars are somewhat wider ( in log distance ) than around g stars . a younger emergence age will be obtained if the star formation rate does not decline at redshifts , but rather stays flat ( as is perhaps suggested by the recent cobe diffuse background experiment ; hauser et al .1998 ; calzetti & heckman 1998 ) .finally , i should note that the arguments presented in this paper should definitely not be taken as attempting to imply that extraterrestrial intelligent civilizations do exist .rather , they show that the conclusion that they do not , is at best premature ( see also rees 1997 for discussions of related issues ) . | it is shown that , contrary to an existing claim , the near equality between the lifetime of the sun and the timescale of biological evolution on earth does not necessarily imply that extraterrestrial civilizations are exceedingly rare . furthermore , on the basis of simple assumptions it is demonstrated that a near equality between these two timescales may be the most probable relation . a calculation of the cosmic history of carbon production which is based on the recently determined history of the star formation rate suggests that the most likely time for intelligent civilizations to emerge in the universe was when the universe was already older than about 10 billion years ( for an assumed current age of about 13 billion years )
fibrous media display different degrees of meso - scale variability from ideal fibre paths . this can be due to the manufacturing of the reinforcement , and to handling and preparation before the moulding step . resin flow inside porous media is influenced by the fibres spatial variability and heterogeneity , and neglecting this causes errors in process analysis and uncertainty in measurement . thus a reliable model of fluid flow in heterogeneous media must include multiscale phenomena and capture the multiscale nature of fluid transport behaviour at microscale , mesoscale and macroscale , in the sense that the dominant processes and governing equations may vary with scale . therefore , extending from the microscale level to a mesoscale one needs upscaling that allows the essence of physical processes at one level to be summarized at the larger level . however , a detailed understanding of the upscaling process from microscale to mesoscale has not yet been achieved . mesoscopic and macroscopic properties of fibrous media such as porosity , fibre size distribution and permeability can be characterized through lab - scale methods while the microscale properties remain uncaptured . the lack of ability to measure on the microscale can lead to uncertainty in interpretations of the data captured at macroscale . a major challenge arising from this non - homogeneity is how macroscale flow is influenced by the microscale structure ( pore spaces ) , as well as by the physical properties of the resin . permeability is an important macroscale variable representing an average of microscale properties of porous media . this average permeability is the fundamental property arising from darcy s law ( [ eq.darcy ] ) : [ eq.darcy ] \bar{u } = -\frac{\mathbf{k}}{\mu}\nabla p , which describes the relation between the volume averaged fluid velocity , \bar{u } , the pressure gradient , \nabla p , the fluid s viscosity , \mu , and the equivalent permeability tensor \mathbf{k } .
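darcy s law , u = -(k/\mu)\nabla p , can be evaluated directly ; all values in this minimal sketch are assumed for illustration only :

```python
import numpy as np

# darcy's law, u = -(K/mu) grad(p); illustrative values, not measured ones
K = np.array([[1.0e-11, 0.0],       # permeability tensor (m^2), assumed
              [0.0,     5.0e-12]])
mu = 0.1                            # resin viscosity (pa*s), assumed
grad_p = np.array([-2.0e5, 0.0])    # pressure gradient (pa/m)

u = -(K / mu) @ grad_p              # volume-averaged velocity (m/s)
print(u)
```

a diagonal but anisotropic tensor is used here to emphasize that flow along and across the fibre direction generally sees different permeabilities .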
empirical equations , such as the kozeny equation ( [ eq.kozeny ] ) , have been developed to relate the meso - scale permeability to the microscale properties , including porosity : [ eq.kozeny ] k=\frac{r_f^2}{4k_0}\frac{\varepsilon^3}{(1-\varepsilon)^2 } . here k_0=c\tau^2 and \tau = l_e / l , where \varepsilon is porosity , r_f is fibre radius , k_0 is the kozeny constant , \tau is tortuosity , l_e is the length of streamlines , l is the length of the sample and c is a proportionality constant . because the nonhomogeneous nature of porous media originates in the randomness of fibre diameter distributions , porosities and pore structure , permeability is subject to uncertainty . causes and effects of this uncertainty have been reviewed . assuming a normal or a lognormal permeability probability density function , previous studies have modeled the effect of this uncertainty on fluid flow in porous media by employing stochastic analysis . considered as a vast area in stochastic processes , such analyses require identifying the sources of uncertainties and selecting probabilistic methods for uncertainty propagation up to different modeling levels . + in simulations of mould filling processes , such as resin transfer moulding , which are described by darcy s law ( [ eq.darcy ] ) , permeability directly affects filling time and flow pattern . an accurate probability density function for permeability is therefore vital for reliable simulations .
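a kozeny - carman - type permeability function can be sketched as follows ; the fibre radius and kozeny constant below are assumed illustrative values , not fitted ones :

```python
import numpy as np

def kozeny_carman(v_f, r_f=3.5e-6, k0=5.0):
    """kozeny-carman-type permeability (m^2) of a fibrous medium with
    porosity eps = 1 - v_f: k = r_f^2/(4*k0) * eps^3/(1-eps)^2.
    r_f (fibre radius, m) and k0 (kozeny constant) are assumed values."""
    eps = 1.0 - v_f
    return (r_f**2 / (4.0 * k0)) * eps**3 / (1.0 - eps)**2

# permeability rises steeply with porosity, i.e. falls with fibre volume fraction
v_f = np.linspace(0.4, 0.8, 5)
print(kozeny_carman(v_f))
```

the steep cubic dependence on porosity is what later makes the permeability distribution strongly right - skewed even for a symmetric fibre volume fraction distribution .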
a number of studies have used normal probability density functions for experimentally determined permeability at macroscale . but their measurements have mostly been for small sample sizes and hence may still be subject to experimental and statistical inaccuracy . in other words , data obtained from the experiments may not be enough to choose between two ( normal or lognormal distribution ) or more competing distribution functions . furthermore , these studies ignore the effect of microscale uncertainties on macroscale permeability uncertainties . + different approximation methods have been used to determine the impact of microscale uncertainties on macroscale permeability uncertainties ; e.g. , finite element based monte carlo and lattice - boltzmann methods have been used to estimate the permeability and superficial velocity of representative volume elements of porous media . the accuracy of the analytical methods has been debated because they have considered homogeneous periodic arrays of parallel fibres instead of random distributions . in addition , all of the modeling approaches require that either the distribution of at least one property of the fibrous media or the distribution of macropores of the fibrous media be known . another criticism is that estimating permeability by curve fitting with empirical constants is known to generate significant systematic errors . + finding an appropriate distribution function describing the spatial variation of permeability in fibrous media is thus a challenging problem , and a method that measures the microstructural variability as input for stochastic simulations is required . in , we analyzed the effect of tortuosity on the variability of permeability at the average local fibre volume fraction ( microscale level ) . we showed that the gaussian distribution is not necessarily the most appropriate distribution for representing permeability .
in this study , we capture the influence of a distribution of local fibre volume fraction ( ) on the variability of permeability . the uncertainty in this variable ( e.g. , ) propagates to a larger level and is reflected in the variability of the geometry of flow , affecting the final quality of composites . in order to establish a probability density function for permeability , in this study we propose that ( i ) the distribution of fibre volume fraction may be approximated by a normal model , ( ii ) the values of fibre volume fraction are used to compute the distribution of permeability applying the kozeny - carman equation , and ( iii ) the kozeny - carman equation is used together with the change - of - variable technique to determine the probability density function of permeability , and subsequently this analytical approach is compared with the simulated distribution of permeability ( figure [ fig : flowchart1 ] ) . there is no universally established relationship between fibre volume fraction and permeability . in this paper we recall the carman - kozeny equation in order to find the pdf of permeability ( [ eq.kozeny ] ) . observe that in ( [ eq.kozeny ] ) the random variable ( rv ) is an increasing function of porosity ( assuming , are constant ) , see figure [ change ] . note also that is significantly affected by ; however , this will not be addressed in this paper . here we use the change - of - variables technique , in this study called `` the change of '' , to investigate pdfs of permeability . this technique is a common and well - known way of finding the pdf of a transformed variable when the original is a continuous random variable with a known probability density function . therefore , to establish the pdf of the random variable , it is required to have the pdf of . thus , in the next section , we determine numerically the pdf of .
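as a sanity check of this change - of - variables step , the sketch below pushes samples of a truncated - normal fibre volume fraction through a monotone kozeny - carman - type map and compares the monte - carlo histogram with the transformed density f_K(k) = f_V(g^{-1}(k)) |d g^{-1}/dk| ; all distribution parameters are assumed illustrative values , not the measured ones :

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)

def g(v):                      # kozeny-carman-type map v_f -> k, up to a constant
    return (1.0 - v)**3 / v**2

# truncated-normal fibre volume fraction (assumed parameters)
mu, sd, lo, hi = 0.6, 0.05, 0.4, 0.8
V = stats.truncnorm((lo - mu)/sd, (hi - mu)/sd, loc=mu, scale=sd)

def g_inv(k):                  # numerical inverse; g is monotone decreasing on (0, 1)
    return optimize.brentq(lambda v: g(v) - k, 1e-6, 1.0 - 1e-6)

def f_K(k, h=1e-6):            # change of variables: f_K(k) = f_V(g^-1(k)) |d g^-1/dk|
    dvdk = (g_inv(k + h) - g_inv(k - h)) / (2.0 * h)
    return V.pdf(g_inv(k)) * abs(dvdk)

# monte-carlo check: histogram of k = g(v_f) against the analytical density
k_samples = g(V.rvs(size=200000, random_state=rng))
hist, edges = np.histogram(k_samples, bins=60, density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
ks = np.linspace(np.quantile(k_samples, 0.05), np.quantile(k_samples, 0.95), 9)
print(np.c_[ks, [f_K(k) for k in ks], np.interp(ks, centers, hist)])
```

the inverse is obtained numerically with a bracketing root - finder , which is exactly the `` numerical solution '' route taken in the text when no closed - form inverse is available .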
]variable fibre volume fraction for a fibrous media depends on areal weight density ( ) and thickness(t ) and can be expressed as equation ( [ eq.poro ] ) : [ eq.poro ] v_f= where stands the fibre density , n is number of layers . to attain a closed form expression of the density of ,both and are considered normal random variables with correlation coefficient , -1<<1 .when =0 , the two variables and are independent , the distribution of would have a cauchy distribution .note that the the cauchy distribution does not have finite moments of any order hence the mean and variance of are undefined .therefore , assuming a cauchy distribution would not be an appropriate model for fibre volume fraction .now set and . in this stage of workwe consider the first order taylor expansion about for ) : [ eq.taylor ] v_f:=v_f(a_w , t)=v_f(_a_w,_t)+v_f_a_w(_a_w,_t)(a_w-_a_w)+v_f_t(_a_w,_t)(t-_t)+o(n^-1),where and are the derivatives of with respect to and respectively . in agreement with , the approximation for given by [ exp : v ] ccl _v_f&=&(v_f(_a_w,_t)+v_f_a_w(_,_t)(a_w-_a_w)+v_f_t(_a_w,_t)(t-_t)+o(n^-1 ) ) + & & ( v_f(_a_w,_t))+v_f_a_w(_a_w,_t)(a_w-_a_w)+v_f_t(_a_w,_t)(t-_t ) + & = & ( v_f(_a_w,_t))=n/ a_w_f _ t. by virtue of the definition of variance we can write [ eq.var ] ^2(v_f)=^2()()^2.next , use the first order taylor expansion once again around . 
then owing to ( [ exp : v ] ) we approximate [ eq : apx.v ] ^2(v_f)\{(v_f(a_w , t)-v_f(_a_w,_t))^2}.substitute ( [ eq.taylor ] ) in ( [ eq : apx.v ] ) , then ( [ eq.var ] ) becomes the following : [ eq.var.1 ] ^2(v_f)&()^2 ^2(+(-_)-(h-_h ) ) + & = ( ) ^2 ^ 2(-t ) + & = ( ) ^2 ^ 2(a_w)+^2(t)-2cov(a_w , t ) + & = ( ) ^2(^2(a_w)+^2(t)-2cov(a_w , t ) ) in ( [ eq.var.1 ] ) , expresses the covariance of while as we said and are average of local areal density and thickness , respectively .now denote , as the standard deviation of rvs , and moreover as their correlation coefficient and their relationship can be expressed as : [ eq.stan ] cov(a_w , t)=(a_w)(t)_corr.in addition we define the coefficient of variation ( ) of a rv , such as : cv(v_f)= [ eq.cvvv ] consequently eqn .( [ eq.var.1 ] ) can be recast as : ^2(v_f)(_v_f)^2(cv^2(a_w)-2cv(a_w)cv(t)a_w_corr+cv^2(t))[eq.var.2]consider ( [ eq.var.2 ] ) and replace it in ( [ eq.cvvv ] ) , then the of fibre volume fraction yields : cv(v_f ) [ eq.var.3]equation ( [ eq.var.3 ] ) shows that the of rv fibre volume fraction is approximately independent from .owing to ( [ eq.var.3 ] ) at point , evidently 3d scatter plots ( [ fig : hhh ] ) exhibits variation of the of the rv fibre volume fraction as a function of the coefficient of variations thickness and local areal density .+ in figure [ fig : hhh ] , it is possible to observe that when increases from zero to one , a saddel is formed with , and then .furthermore , figure [ fig : hhh ] shows that as and approach each other too closely , moves from the top to the lowest level .+ as to the pdf of , section 2.1.1 establises experimentally that the random variable has normal distribution .although by calling the following cases we claim that this assertion in independent case is not theoretically artificial : * and are * independent*. 
it is straightforward that when a random variable follows cauchy distribution with parameter then has also cauchy distribution with parameter .further , according to our explanation above , we can easily check that the ratio of two independent normal random variables determines the cauchy pdf .hence as result if we consider random variables and takes normal and cauchy distributions respectively then has normal pdf . * and are * dependent*. this case is more complicated but practical .first note that if one is interested in bivariate normal distribution for pair .we address to which study the ratio of two correlated normal random variables .the author indeed has established that if and be normally distributed random variables with means , variances , and correlated coefficient , then the exact pdf of takes the form ( 1 ) in page 636 in .for simplicity we omit the form ( 1 ) here .however , in we could observe that in this special case , i.e. normally distributed the pdf of is not necessarily symmetric and normal .therefore by virtue of author s investigations , it is not clear yet that what kind of distributions should be considered for to prove analytically a normal pdf for .consequently , the solution which may come cross the mind is experimental results represented in next subsection .table [ input ] shows the specifications of a twill carbon woven fabric used for production of composite parts in this study .the fabric was cut in the warp direction .composite parts were produced with same using high injection pressure resin transfer moulding(hiprtm ) .full production details have been presented in .[ input ] in order to measure the h of each tow , series of samples were cut perpendicular to the fibre direction , the samples were polished manually in four steps ( sandpaper grits 320 , 400 , 800 , and 1000 ) , and subsequently photographed using an olimpus png3 optical microscope equipped with a ccd camera .the analysis of each tow is carried out by image 
processing matlab 2015 . before importing the images into the matlab workspace , arears such as edges or borders which are not in the interest of tow geometry characterizations were cropped . in projective geometryevery tow section is equivallent to an ellipse .figure [ ellipse]a illustrates how an ellipse is fitted to a tow to locate the center point based on and .then the tow thickness ( ) is equivalent to twice the length of the minor radius ( ) ( figure [ ellipse]b ) .a total of 200 optical images were collected and an ellipse is fitted to each tow . to determine the of each tow , the area of each tow ( ) in warp direction was approximated by the equivalent ellipse .then , the number of filaments per tow ( ) were counted .knowing the radius of the fibre cross section(r ) , length of warp tow ( l ) , and , the of each tow was computed from the following equation ( [ eq.ad ] ) : = [ eq.ad ] to determine distribution of , the `` r '' statistical software was employed .for each pair ( , ) , was computed , afterward a histogram was generated . a scatter plot of and for each tow is shown in figure [ fig : cv ] .the coefficient of variation for the data was 0.72 .an ellipse is fitted to the data .we can see that the ellipse extends between 100 and 200 on the 45 degree line .hence and follow a bivariate normal distribution with mean components 168.5 and 102.3 , respectively . therefore ,our results suggest that and are dependent and can be well approximated by the bivariate normal distribution . 
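given the bivariate - normal behaviour of ( , ) observed above , the first - order ( delta - method ) expression cv( ) ≈ ( cv²(a_w) - 2 ρ cv(a_w) cv(t) + cv²(t) )^{1/2} derived in the previous section can be sanity - checked by monte carlo . the means below are taken from the scatter data , while the cvs and correlation are assumed illustrative values :

```python
import numpy as np

rng = np.random.default_rng(2)

mu_a, mu_t = 168.5, 102.3          # means of areal density and thickness (from the data)
cv_a, cv_t, rho = 0.05, 0.04, 0.6  # assumed illustrative cvs and correlation

sa, st = cv_a*mu_a, cv_t*mu_t
cov = [[sa**2, rho*sa*st],
       [rho*sa*st, st**2]]
a_w, t = rng.multivariate_normal([mu_a, mu_t], cov, size=200000).T

# v_f is proportional to a_w / t; the constant n/rho_f cancels in the cv
vf = a_w / t
cv_mc = vf.std() / vf.mean()
cv_taylor = np.sqrt(cv_a**2 - 2*rho*cv_a*cv_t + cv_t**2)
print(cv_mc, cv_taylor)
```

with cvs of a few percent the second - order terms neglected by the taylor expansion are small , so the monte - carlo and first - order values agree closely ; a positive correlation between areal density and thickness visibly reduces the cv of the ratio .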
for each pair of and , was computed .figure [ fig : hist1 ] shows that the distribution of the local average fibre volume fraction follows a bell - curve distributions : the distribution of fibre volume fraction values are well approximated with a normal distribution model .a large distribution of was observed , ranging from to .the graphical analysis indicates the closeness of the local average fibre volume fraction data to the normal distribution with cumulative distribution function(cdf)(figure [ fig : cdf ] ) .the cdf plot is following a typical s curve indicative of normal distribution .it follows from the section of that the fibre volume fraction of a preform , , has a truncated normal pdf , , with mean and variance over ] , we obtain [ eq.k1 ] l f_k(k)=g^-1(k ) .f^tn_v_f(g^-1(k ) ) + = g^-1(k ) . , k_1<k <k_2 . as figure [ change ] shows this function is a one to one function so it is invertible .however , while it may be possible to find a closed form solution to this inverse , we will begin by presenting a numerical solution to this problem .a summary of statistical properties of permeability for the different cov of is represented in figure [ fig : cvflow ] .it is observed that an in - plane distribution of local fibre volume fraction results in a distribution of local permeabilities for flow perpendicular to the plane of the fibrous medium .it is established in figure [ fig : cvflow ] that domain with larger local average areal densities possess larger local average permeability values .this phenomenon can be explained in terms of probability of number of contact points between fibres .the probability of number of contact points of fibres is larger for the domains with higher local average areal density .higher contact points means stiffer arrangement of adjacent fibres , causing increased frictional resistance to fluid flow , and hence leading to less permeable area .furthermore the normalized local average permeability ( i ) initially decreases when 
increases due to its influence on , and ( ii ) subsequently increases due to the increasing influence of on the normalized local average permeability . therefore , as expected , this implies that the heterogeneity of fibrous media has a significant impact on the magnitude of variation in permeability . the pdf ( [ eq.k1 ] ) is computed applying monte - carlo simulations ( using the `` r '' statistical software ) . the monte - carlo simulation was carried out on the domain with an average fibre volume fraction of 0.78 . subsequently this analytical model fit ( [ eq.k1 ] ) was compared with what would be predicted by physically based permeability equations ( e.g. kozeny - carman ( [ eq.kozeny ] ) ) . to do so , using the values of computed earlier , the distribution of permeability was obtained from ( [ eq.kozeny ] ) . then , probability values calculated through ( [ eq.k1 ] ) were fit to permeability values . finally , in order to find the best distribution model for the analytical pdf , different statistical distributions are examined . + figure [ map ] shows , for the pdf obtained through the method described above , that ( i ) the data are not symmetric and are skewed to the right , and ( ii ) there is significant agreement between the analytical approach and the simulation results of permeability ( kozeny - carman ) . the observed skewness in the permeability distribution is close to what was observed in at the macrolevel . the same principle was applied to various kinds of correlations between and . table [ com ] lists the distribution behaviour of derived from models found in the literature . the appearance of the distributions presented in table [ com ] is the same as the one shown in figure [ map ] . table [ com ] shows that the skewness of lies between 1.78 and 3.45 , implying a non - normal distribution of the permeability . in addition , the kurtosis values range between 9 and 24 , deviating extremely from normality . as expected , there is a significant difference between the calculated skewness and kurtosis and that
of the normal distribution. [com]

furthermore, figure [fa] illustrates how well different distributions, including the normal, lognormal, gamma and beta, fit the analytical probability function. it is clear that the normal and beta distributions fail to represent the distribution of permeability values. figure [fa] also shows that both a lognormal and a gamma distribution provide a good fit to the permeability values. it is known that both the lognormal and the gamma distribution can be used effectively to analyse non-negative, right-skewed data sets. as shown in figure [fa], a comparison between the lognormal and gamma distributions reveals that their pdfs give similar results. to determine which of these distributions fits the data better, we considered two data transformations: one based on the normal approximation to the logarithm of the data, appropriate for the lognormal distribution, and one based on the cube root of the data, appropriate for the gamma distribution. if the data look symmetric after the log transformation, the lognormal distribution better represents the variation of the permeability; if the data look symmetric after the cube-root transformation, the gamma distribution better represents it. according to figure [gl], although the log transformation fits well in the body of the permeability values, the data remain left-skewed on the log scale. a significant left skew on the log scale is also observable in the work of zhang et al. (see figure [zh]), who evaluated the permeability from the local areal weight combined with the kozeny-carman model and suggested that the lognormal distribution can be used to describe the permeability distribution. one final comment, on which we would like to conclude this section, regards entropy.
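before turning to entropy, the monte carlo construction and the two transform checks described above can be sketched as follows. the sketch is in python rather than the paper's r, and the kozeny-carman constants and sample size are illustrative placeholders, not the paper's values; only the mean fibre volume fraction of 0.78 and its cov of 0.086 are taken from the text.

```python
import math
import random

random.seed(0)

def skewness(xs):
    """Moment-based sample skewness."""
    n = len(xs)
    mu = sum(xs) / n
    sd = math.sqrt(sum((x - mu) ** 2 for x in xs) / n)
    return sum(((x - mu) / sd) ** 3 for x in xs) / n

def kozeny_carman(vf, c=1.0, d=1.0):
    """Kozeny-Carman-type law K ~ d^2 (1 - vf)^3 / (c vf^2); the
    constants c and d are illustrative placeholders."""
    return d ** 2 * (1.0 - vf) ** 3 / (c * vf ** 2)

# Sample the local fibre volume fraction around the paper's mean of 0.78,
# with the fibre volume fraction CoV of 0.086 quoted in the text.
vf = [random.gauss(0.78, 0.78 * 0.086) for _ in range(20000)]
k = [kozeny_carman(v) for v in vf if 0.0 < v < 1.0]

# (i) Although vf is symmetric, the simulated permeability is strongly
# right-skewed, consistent with the skewness range reported in table [com].
print(round(skewness(k), 2))

# (ii) Transform diagnostics: the data remain left-skewed after a log
# transform, while the cube-root transform brings them closer to symmetry,
# which is the criterion favouring the gamma fit over the lognormal one.
log_skew = skewness([math.log(x) for x in k])
cbrt_skew = skewness([x ** (1 / 3) for x in k])
print(round(log_skew, 2), round(cbrt_skew, 2))
```

the cube-root criterion follows the wilson-hilferty approximation cited in the references, under which the cube root of a gamma variate is close to normally distributed.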
in the literature it has been proven that, under the constraints of known mean and variance, the normal distribution maximizes entropy. using the principle that maximum entropy is a selection criterion for a model, we calculated the entropy values of the two fits. straightforward computations give an entropy of -0.931 for the lognormal pdf and -0.887 for the gamma pdf, which coincides with the previous result: the gamma distribution, having the larger entropy, better describes the permeability data. in order to examine the applicability of the gamma distribution to other empirical permeability equations, these equations were subjected to the kolmogorov-smirnov statistic (kss) goodness-of-fit test against the normal, lognormal, gamma, weibull and beta distributions. if, in the kss test, p < 0.05, there is a significant probability of deviation from the tested distribution. the computed p-values are listed in table [gamma]. the gamma distribution shows the largest p-values among the given distributions for the different empirical permeability equations; hence, based on table [gamma], we conclude that the gamma distribution provides the best fit. furthermore, table [gamma] shows that the permeability covs of all empirical equations range between 0.43 and 0.74, which is about 8 times larger than the fibre volume fraction cov of 0.086. this suggests that the permeability is subject to larger uncertainty than the fibre volume fraction.

an adequate representation of the microstructural variability of the fibre arrangement in fibre-reinforced composites is of critical importance for the analysis of flow in fibrous media.
the distribution of fibre volume fraction was quantified by measuring the areal weight density and thickness from optical images of tows in a twill carbon-epoxy composite. then, the pdf of the permeability was determined from the known pdf of , assuming a constant . to do so, we proposed a method to determine the probability density function of the permeability of porous media, employing the kozeny-carman equation combined with the change-of-variable technique. our results suggest that the relationship between the local areal weight density and thickness is well approximated by a bivariate normal distribution. the distribution of the local fibre volume fraction exhibits a bell-shaped curve and fits a normal distribution model well. assuming constant , a gamma distribution more accurately describes the variation in the permeability data.

in conclusion, the probability distribution of permeability still requires further clarification, but the hypothesis of normality has been refuted.

sys thanks the capes pnpd-ufscar foundation for financial support in the years 2014-15, and the federal university of sao carlos, department of statistics, for its hospitality in 2014-15.

carman p. fluid flow through a granular bed. _transactions of the institution of chemical engineers_ 1937; 15:150-167. wong c, long a. modelling variation of textile fabric permeability at mesoscopic scale. _plast rubber compos_ 2006; 35(3):101-111. mesogitis ts, skordos aa, long ac. uncertainty in the manufacturing of fibrous thermosetting composites: a review. _composites part a: applied science and manufacturing_ 2014; 57:67-75. bodaghi m, goncalves ct, correia nc.
a quantitative evaluation of the uncertainty of permeability measurements in constant thickness fibre reinforcement. _in: proceedings of the eccm-16 conference_, june 2014. arbter r, et al. experimental determination of the permeability of textiles: a permeability benchmark exercise. _composites part a: applied science and manufacturing_ 2011; 42:1157-1168. vernet n, et al. experimental determination of the permeability of engineering textiles: permeability benchmark ii. _composites part a: applied science and manufacturing_ 2014; 61:172-184. pan r, liang z, zhang c, wang b. statistical characterization of fibre permeability for composite manufacturing. _polymer composites_ 2000; 21(6):996-1006. hoes k, dinescu d, sol h, vanheule m, parnas rs, luo y, verpoest i. new set up for measurement of permeability properties of fibrous reinforcements for rtm. _composites part a: applied science and manufacturing_ 2002; 33:959-969. li j, zhang c, liang z, wang b. stochastic simulation based approach for statistical analysis and characterization of composites manufacturing processes. _journal of manufacturing systems_ 2006; 25(2):108-121. endruweit a, long ac, robitaille f, rudd cd. influence of stochastic fibre angle variations on the permeability of bi-directional textile fabrics. _composites part a: applied science and manufacturing_ 2006; 37:122-132. wong cc. modelling the effects of textile preform architecture on permeability. _phd thesis, university of nottingham, nottingham, england_ 2006. padmanabhan sk, pitchumani r. stochastic modelling of nonisothermal flow during resin transfer moulding. _international journal of heat and mass transfer_ 1999; 42:3057-3070. zhang f, cosson b, comas-cardona s, binetruy c.
efficient stochastic simulation approach for rtm process with random fibrous permeability. _composites science and technology_ 2011; 71:1478-1485. zhang f, comas-cardona s, binetruy c. statistical modelling of in-plane permeability of non-woven random fibrous reinforcement. _composites science and technology_ 2012; 72(12):1368-1379. parseval y, roy rv, advani sg. effect of local variations of preform permeability on the average permeability during resin transfer molding of composites. _antec 95_ 1995; 2:3040-3044. gebart br. permeability of unidirectional reinforcements for rtm. _journal of composite materials_ 1992; 26:1100-1133. sahraoui m, kaviany m. slip and no-slip boundary conditions at interface of porous, plain media. _international journal of heat and mass transfer_ 1992; 35:927-943. bruschke mv, advani sg. flow of generalized newtonian fluids across a periodic array of cylinders. _journal of rheology_ 1993; 37:479-498. westhuizen jvd, plessis jpd. an attempt to quantify fibre bed permeability utilizing the phase average navier-stokes equation. _composites part a: applied science and manufacturing_ 1996; 27a:263-269. lee sl, yang jh. modeling of darcy-forchheimer drag for fluid flow across a bank of circular cylinders. _international journal of heat and mass transfer_ 1997; 40:3149-3155. catalanotti g, bodaghi m, correia n. on the statistics of transverse permeability of randomly distributed fibres. _submitted to journal of science and technology_ 2015; under review. yu b. analysis of flow in fractal porous media. _applied mechanics reviews_ 2008; 61:050801-1-19. haldar a, mahadevan s. probability, reliability, and statistical methods in engineering design. _john wiley, new york_ 2000. stuart a, ord k. _kendall's advanced theory of statistics_ 1998; arnold, london, 6th edition, vol 1: p. 351. elandt-johnson rc, johnson nl.
_survival models and data analysis_ 1980; john wiley and sons, ny, p. 69. hinkley dv. on the ratio of two correlated normal random variables. _biometrika_ 1969; 56(3):635-639. cedilnik a, komelj k, blejec a. ratio of two random variables: a note on the existence of its moments. _metodoloski zvezki_ 2006; 3(1):1-7. xie y, jie q. a new efficient ellipse detection method. _ieee_ 2002; 1051:4651-4652. basca ca, talos m, brad r. randomized hough transform for ellipse detection with result clustering. _ieee_ 2005; 1397-1400. dodson c, oba y, sampson w. on the distributions of mass, thickness and density in paper. _appita j_ 2001; 54:385-389. r development core team. r: a language and environment for statistical computing. _vienna, austria: r foundation for statistical computing_ 2010. liu q, parnas rs, giffard hs. new set-up for in-plane permeability measurement. _composites part a: applied science and manufacturing_ 2007; 38:954-962. gutowski tg, cai z, bauer s, boucher d, kingery j, wineman s. consolidation experiments for laminate composites. _journal of composite materials_ 1987; 21:650-669. happel j. viscous flow relative to arrays of cylinders. _aiche journal_ 1959; 5:174-177. johnson, norman l.
; kotz, samuel; balakrishnan, narayanaswamy. continuous univariate distributions. _wiley_ 1994; 1:173. wilson eb, hilferty mm. the distribution of chi-square. _proceedings of the national academy of sciences_ 1931; 17:684-688. zhang p, song pxk, qu a, greene t. efficient estimation for patient-specific rates of disease progression using nonnormal linear mixed models. _biometrics_ 2008; 64:29-38.

permeability of fibrous porous media at the micro/meso scale-level is subject to significant uncertainty due to the heterogeneity of the fibrous media. the local microscopic heterogeneity and the spatial variability of porosity, tortuosity and fibre diameter affect the experimental measurements of permeability at the macroscopic level. this means that the selection of an appropriate probability density function (pdf) is of crucial importance in the characterization of both the local variations at the microscale and the equivalent permeability at the experimental level (macroscale). this study addresses the issue of whether or not a normal distribution appropriately represents permeability variations. to do so, (i) the distribution of the local fibre volume fraction for each tow is experimentally determined by estimation of each pair of local areal density and thickness, (ii) the kozeny-carman equation together with the change-of-variable technique is used to compute the pdf of the permeability, (iii) using the local values of fibre volume fraction, the distribution of the local average permeability is computed and subsequently the goodness of fit of the computed pdf is compared with the distribution of the permeability at the microscale level. finally, the variability of the local permeability at the microscale level is determined.
the first set of results reveals that (1) the relationship between the local areal density and local thickness in a woven carbon-epoxy composite is modelled by a bivariate normal distribution, (2) while the fibre volume fraction follows a normal distribution, the permeability follows a gamma distribution, and (3) there is close agreement between the analytical approach and the simulation results. the second set of results shows that the coefficient of variation of the permeability is one order of magnitude larger than that of the fibre volume fraction. future work will consider other variables, such as the type of fabric and the degree of fibre preform compaction, to determine whether or not the bivariate normal model is applicable to a broad range of fabrics.
data management systems, such as database, information retrieval (ir), information extraction (ie) or learning systems, store, organize, index, retrieve and rank information units such as tuples, objects, documents and items. a wide range of applications of these systems has emerged that requires the management of uncertain or imprecise data; important examples are sensor data, webpages, newswires and imprecise attribute values. what is common to all these applications is uncertainty, and hence the need to deal with decision and statistical inference. ranking is perhaps the most crucial task performed by data management systems that have to deal with uncertainty. in many applications, ranking aims at deciding or inferring, for example, the class assigned to a unit, or the order, by relevance, usefulness or utility, of the units delivered to another application or to an end user. in addition, ranking is performed to decide whether a unit should be placed at a given rank. the management of imprecise data requires means for ranking information units by probability: ranking places information units in a list ordered by a measure of utility, cost, relevance, etc., while a probability theory measures the uncertainty of the decision.
to this end, the definition of an event space and the estimation of probabilities are necessary steps for representing imprecise data and making predictions in many contexts of data management, such as machine learning, information retrieval or probabilistic databases. the measurement of the imprecision and uncertainty in the data leads to the definition of regions of acceptance for a predefined set of hypotheses, thus reducing many decision problems to the calculation of a probability of detection and a probability of false alarm. although data management systems reach good results thanks to classical probability theory and parameter tuning, ranking is far from perfect, because useless units are often ranked at the top or useful units are missed. classical probability theory describes events and probability distributions using sets and set measures, respectively, according to kolmogorov's axioms. in contrast, quantum probability theory describes events and probability distributions using hermitian operators in a complex hilbert space. whereas parameter tuning is performed within a fixed probability theory, the adoption of quantum probability entails a radical change. furthermore, whereas classical probability is based on sets, so that the regions of acceptance or rejection are set-based detectors (i.e.
, indicator functions), quantum probability is based on subspaces, and the corresponding detectors are projector-based. note that the use of quantum probability does not imply that quantum phenomena are investigated in this paper; we are instead interested in the formalism based on hilbert spaces. the main question asked in the paper is whether further improvement may be obtained if classical probability theory is replaced by quantum probability theory. the paper shows that ranking information units by quantum probability yields different outcomes which are in principle more effective than ranking them by classical probability, given the same data available for parameter estimation. effectiveness is measured in terms of the probability of detection (also known as recall or power) and the probability of false alarm (also known as fallout or size). we structure the paper as follows. section [sec:class-prob-quant] illustrates the basics of probability theory through a view that encompasses both theories. section [sec:quant-prob-detect] compares quantum detection with classical detection. section [sec:optim-rank-quant] shows that ranking by quantum probability is more effective than ranking by classical probability. section [sec:interpr-quant-proj] provides an interpretation of the projectors which define the regions of acceptance and rejection. section [sec:impl-rank] describes the algorithm for ranking information units by quantum probability. section [sec:related-work] provides an overview of the related work.

in this section, we introduce a special view of probability distributions for the classical theory of probability. the same view is also introduced for quantum probability, which is a non-classical theory and does not admit the distributive law, in order to provide a general framework for quantum and classical probabilities; the view is depicted in figure [fig:correspondence].
before introducing this view of probability theory, some basic definitions are provided. a probability space is a set of mutually exclusive events such that each event is assigned a probability between 0 and 1 and the sum of the probabilities over the set of events is 1. for the sake of clarity, we introduce the case of binary event spaces because it is the simplest and the most common in data management; keyword occurrence in webpages, binary features in sample records or binary attribute values in relational tables are some examples. binary event spaces are usually represented by mutually exclusive scalars such as 0 and 1; if binary scalars are used, the mutual exclusiveness is given by the scalar product. whereas scalars are one possible representation of events, the vectors of a complex finite-dimensional vector space are another option. when using vectors, an event is $|1\rangle$ and its complement is $|0\rangle$. the representation of the events must encode the mutual exclusiveness; if binary vectors are used, the mutual exclusiveness is given by the inner product, which must be 0. when the event space is not binary (e.g., when the events are represented by natural numbers), a binary representation can again be used: one basis vector is assigned to each symbol. whatever representation is used, the inner product between two distinct event vectors must be 0 and their norm must be 1. the mapping between the probabilities and the events is called a `` probability distribution '' , which is a function mapping a mathematical object representing an event to a real number between 0 and 1. the difference between classical probability and quantum probability is due to the way the event space and the probability distribution are represented. the starting point of the view of probability used in the paper is the algebraic form of the probability space.
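the vector representation of binary and $k$-ary event spaces described above can be checked with a minimal sketch; nothing beyond the standard one-hot basis is assumed:

```python
# Binary events as orthonormal column vectors: |1> = (1, 0), |0> = (0, 1).
ket1 = (1.0, 0.0)
ket0 = (0.0, 1.0)

def inner(u, v):
    """Inner product of two real vectors (no conjugation needed here)."""
    return sum(a * b for a, b in zip(u, v))

# Mutual exclusiveness: the inner product of distinct event vectors is 0;
# each event vector has norm 1 (inner product with itself equal to 1).
print(inner(ket1, ket0), inner(ket1, ket1), inner(ket0, ket0))  # 0.0 1.0 1.0

# A non-binary k-event space uses the one-hot (standard basis) encoding:
# the i-th basis vector is assigned to the i-th symbol.
def basis_vector(i, k):
    return tuple(1.0 if j == i else 0.0 for j in range(k))
```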
to this end, hermitian (or self-adjoint) linear operators are used. in quantum mechanics, `` operator '' is preferred to `` matrix '' , yet in this paper, for the sake of clarity, `` matrix '' is preferred because, for a fixed basis, the matrices are isomorphic to the operators. a matrix is hermitian when it is equal to its conjugate transpose. hermitian matrices are important because their eigenvalues are always real. in particular, the hermitian matrix with trace 1 is the key notion in quantum probability because the sum of its eigenvalues is 1 and, thus, the eigenvalues can be viewed as a probability distribution. a projector is an idempotent hermitian matrix. every subspace has one projector, so the projectors are in 1:1 correspondence with the subspaces. each unit vector corresponds to one rank-one projector, defined as the outer product of the vector with its conjugate transpose. there are two main requirements for representing events using projectors:

* the projectors must be mutually orthogonal, to represent the mutual exclusiveness of the events, and

* the projectors must have trace 1, to make the probability calculation consistent with the probability axioms.

an event space and a probability function defined over it are represented using hermitian matrices with trace 1. in particular, a projector represents an event, and an event space is modeled by a collection of projectors. as the union of the events results in the whole event space, the sum of the projectors of a collection corresponding to an event space results in the unity.
more specifically, if $\{\mathbf{e}_1, \ldots, \mathbf{e}_k\}$ is a collection of mutually orthogonal projectors representing an event space, then $\mathbf{e}_1 + \cdots + \mathbf{e}_k = \mathbf{i}$, the latter being termed a `` resolution to the unity ''. for example, using the dirac notation introduced in appendix [sec:dirac-notation], the vectors of two events are represented by $${|1\rangle} = \left( \begin{array}{c} 1 \\ 0 \end{array} \right) \qquad {|0\rangle} = \left( \begin{array}{c} 0 \\ 1 \end{array} \right)$$ and the corresponding projectors by $${|1\rangle\langle1|} = \left( \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right) \qquad {|0\rangle\langle0|} = \left( \begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array} \right)$$ however, there is not a unique representation of an event space. for example, the following vectors also represent mutually exclusive events: $$\left( \begin{array}{r} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{array} \right) \qquad \left( \begin{array}{r} \frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} \end{array} \right)$$ thus leading to a different resolution to the unity given by the following projectors: $$\left( \begin{array}{rr} \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} \end{array} \right) \qquad \left( \begin{array}{rr} \frac{1}{2} & -\frac{1}{2} \\ -\frac{1}{2} & \frac{1}{2} \end{array} \right)$$ the second kind of hermitian matrix of a probability space is the density matrix; the density matrix encapsulates the probability values assigned to the events. in physics, a density matrix represents the _state_ of a microscopic system, such as a particle or a photon. the structure of a microscopic system is unknown, yet a device can measure the system to obtain some information. a microscopic system is similar to an urn of colored balls: the internal composition of the urn remains unknown even if the urn is opened and observed, because the device disturbs the state (i.e., the distribution of the colors) of the urn.
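the properties of the rotated resolution to the unity above (idempotence, mutual orthogonality, and summation to the identity) can be verified numerically; the sketch below uses plain python lists for the 2x2 real matrices:

```python
import math

def outer(u, v):
    """Outer product of two real vectors, returned as a list of rows."""
    return [[a * b for b in v] for a in u]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def approx(a, b, tol=1e-12):
    """Element-wise comparison up to floating-point tolerance."""
    return all(abs(a[i][j] - b[i][j]) < tol
               for i in range(len(a)) for j in range(len(a)))

s = 1 / math.sqrt(2)
# Projectors of the rotated basis (1/sqrt(2))(1, 1) and (1/sqrt(2))(1, -1).
p_plus = outer((s, s), (s, s))
p_minus = outer((s, -s), (s, -s))

# Idempotence: P^2 = P, so each matrix is indeed a projector.
print(approx(matmul(p_plus, p_plus), p_plus))             # True
# Mutual orthogonality: P+ P- = 0 (mutually exclusive events).
print(approx(matmul(p_plus, p_minus), [[0, 0], [0, 0]]))  # True
# Resolution to the unity: P+ + P- = I.
total = [[p_plus[i][j] + p_minus[i][j] for j in range(2)] for i in range(2)]
print(approx(total, [[1, 0], [0, 1]]))                    # True
```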
in data management and in other domains different from particle physics, a system is macroscopic instead. examples of macroscopic systems in data management are webpages, customers, queries, clicks, tuples, attributes, and so on. the states of these systems correspond to the probability densities according to which keywords, reviews or attribute values are observables to be measured from such systems. density matrices are a powerful formalism in the macroscopic world too, because they allow us to introduce the algebraic approach adopted for incorporating the more powerful probability space and decision rule suggested in this paper. to introduce the way density matrices are defined, consider two equiprobable events, e.g., the occurrence of a feature or a positive/negative customer review. the probability distribution is $(\frac{1}{2}, \frac{1}{2})$, where each value refers to an event. as an alternative to a list, the probability distribution can be arranged along the diagonal of a two-dimensional matrix whose other elements are zeros. for example, the matrix corresponding to the probability distribution of two equally probable events is $$\left( \begin{array}{cc} \frac{1}{2} & 0 \\ 0 & \frac{1}{2} \end{array} \right)$$ in general, the probability distribution of a $k$-event space can be written as $$\left( \begin{array}{cccc} p_1 & 0 & \cdots & 0 \\ 0 & p_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & p_k \end{array} \right)$$ a probability distribution is _pure_ when the density matrix is a projector; otherwise, the distribution is _mixed_.
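a sketch of the diagonal arrangement above follows. the purity check used here, $\mathrm{tr}(\rho^2) = 1$ exactly when the density matrix is a projector, is a standard fact about density matrices that is not stated in the text but is equivalent to the pure/mixed distinction it draws:

```python
def density_from_distribution(ps):
    """Arrange a probability distribution along the diagonal of a matrix."""
    k = len(ps)
    return [[ps[i] if i == j else 0.0 for j in range(k)] for i in range(k)]

def trace(m):
    return sum(m[i][i] for i in range(len(m)))

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

rho_mixed = density_from_distribution([0.5, 0.5])  # two equiprobable events
rho_pure = density_from_distribution([1.0, 0.0])   # probability concentrated

# Every density matrix has trace 1 ...
print(trace(rho_mixed), trace(rho_pure))           # 1.0 1.0
# ... but only the pure one is a projector: tr(rho^2) = 1 iff pure.
print(trace(matmul(rho_mixed, rho_mixed)))         # 0.5
print(trace(matmul(rho_pure, rho_pure)))           # 1.0
```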
a distribution is mixed when the density matrix is a mixture of density matrices; a pure distribution is the instance of a mixture with one matrix. the density matrix representing a pure distribution is in 1:1 correspondence with a density vector such that the projector is the outer product of the vector with its conjugate transpose. a classical probability distribution is pure when the probability is concentrated on a single elementary event, which is then the certain event and has probability 1. given a density matrix, the spectral theorem helps find the underlying events and the related probabilities. because of the importance of the spectral theorem, we provide its statement below: [sec:class-prob-quant-1] to every hermitian matrix $\mathbf{a}$ on a finite-dimensional complex inner product space there correspond real numbers $\lambda_1, \ldots, \lambda_k$ and rank-one projectors $\mathbf{e}_1, \ldots, \mathbf{e}_k$ so that the $\lambda_i$ are pairwise distinct, the $\mathbf{e}_i$ are mutually orthogonal, $\mathbf{e}_1 + \cdots + \mathbf{e}_k = \mathbf{i}$, and $\mathbf{a} = \lambda_1 \mathbf{e}_1 + \cdots + \lambda_k \mathbf{e}_k$. see . the eigenvalues $\lambda_i$ are the spectrum of $\mathbf{a}$, and the $\mathbf{e}_i$ are the projectors of the spectrum of $\mathbf{a}$. from theorem [sec:class-prob-quant-1], thus, a pure distribution is always a rank-one projector. the spectral theorem says that any hermitian matrix corresponding to a distribution can be decomposed as a linear combination of projectors (i.e., pure distributions) where the eigenvalues are the probability values associated with the events represented by the projectors. the eigenvalues are real because the decomposed matrix is hermitian, and they are non-negative and sum to 1. for example, when the matrix corresponding to the distribution of two equally probable events is considered, the spectral theorem says that $$\left( \begin{array}{cc} \frac{1}{2} & 0 \\ 0 & \frac{1}{2} \end{array} \right) = \frac{1}{2} \left( \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right) + \frac{1}{2} \left( \begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array} \right)$$ a mixed distribution has more than one non-zero eigenvalue; a pure distribution has a single non-zero eigenvalue.
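for 2x2 real symmetric matrices the spectrum can be computed in closed form, which makes the distinction between mixed and pure distributions, and the trace rule $\mathrm{tr}(\rho\,\mathbf{e})$ used for probabilities, easy to check; a sketch:

```python
import math

def eig2(a, b, d):
    """Eigenvalues of the real symmetric 2x2 matrix [[a, b], [b, d]],
    via the standard closed form (mean of the diagonal +/- radius)."""
    m = (a + d) / 2
    r = math.sqrt(((a - d) / 2) ** 2 + b ** 2)
    return m + r, m - r

# The mixed matrix diag(1/2, 1/2): two equiprobable events.
print(eig2(0.5, 0.0, 0.5))  # (0.5, 0.5)

# The non-diagonal but pure matrix [[1/2, 1/2], [1/2, 1/2]]: its spectrum
# is {1, 0}, i.e. one certain event and one impossible event.
print(eig2(0.5, 0.5, 0.5))  # (1.0, 0.0)

# The trace-based probability tr(rho E) for the projector E = |1><1|.
def born(rho, e):
    prod = [[sum(rho[i][k] * e[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]
    return prod[0][0] + prod[1][1]

rho = [[0.5, 0.0], [0.0, 0.5]]
e = [[1.0, 0.0], [0.0, 0.0]]
print(born(rho, e))         # 0.5, matching the worked example in the text
```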
in classical probability, every pure distribution represented by a diagonal density matrix corresponds to a projector. however, in general, a density matrix is not necessarily diagonal, yet the matrix is necessarily hermitian. for example, the projectors of the alternative resolution to the unity introduced above are trace-one projectors and correspond to pure distributions; thus there is a certain event (with probability 1) and an impossible event (with probability 0, of course), yet the projectors are not diagonal. when, for example, keyword occurrence in webpages is represented, the first projector may be assigned eigenvalue 1 and the other is assigned eigenvalue 0; thus the former represents the certain event and the latter represents the impossible event of the probability space. when using the algebraic form to represent probability spaces, the function for computing a probability is the trace of the matrix obtained by multiplying the density matrix by the projector corresponding to the event. the usual notation for the probability of the event represented by projector $\mathbf{e}$ when the distribution is represented by density matrix $\boldsymbol{\rho}$ is $${\mathrm{tr}}(\boldsymbol{\rho}\,\mathbf{e})$$ which is also known as born's rule. for example, when $$\boldsymbol{\rho} = \left( \begin{array}{cc} \frac{1}{2} & 0 \\ 0 & \frac{1}{2} \end{array} \right) \qquad \mathbf{e} = \left( \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right)$$ the probability is $${\mathrm{tr}}\left( \left( \begin{array}{cc} \frac{1}{2} & 0 \\ 0 & \frac{1}{2} \end{array} \right) \left( \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right) \right) = {\mathrm{tr}}\left( \left( \begin{array}{cc} \frac{1}{2} & 0 \\ 0 & 0 \end{array} \right) \right) = \frac{1}{2}$$ when $\mathbf{e} = {|e\rangle\langle e|}$ is a rank-one projector, the trace-based probability function can be written as $\langle e|\boldsymbol{\rho}|e\rangle$. from the example, the definition of a function that computes the probability of an event when the probability is already allocated in the diagonal of the density matrix may seem odd. however, we have shown that not all density matrices corresponding to a distribution need be diagonal, and the diagonal elements do not
necessarily correspond to probability values, although they do have to sum to 1. a density matrix encapsulates the values assigned to the events by a probability function because of gleason's theorem, stated below and proved in . to every probability distribution on the set of all projectors in a complex vector space with dimension greater than 2 corresponds a unique density matrix $\boldsymbol{\rho}$ on the same vector space for which the probability of the event represented by a projector ${|u\rangle\langle u|}$ is $\langle u|\boldsymbol{\rho}|u\rangle$ for every unit vector $|u\rangle$ in the vector space. basically, the theorem tells us that corresponding to a probability distribution is one density matrix such that the probability of any event represented as a projector is calculated by the trace function. the probability of an event computed using a mixture differs from the probability computed using a pure state (superposition); the two share the classical probability term, whereas the difference is called the _interference term_. using a mixture, the density matrix is $$\left( \begin{array}{cc} |a_1|^2 & 0 \\ 0 & |a_0|^2 \end{array} \right)$$ using a superposition $a_1 {|1\rangle} + a_0 {|0\rangle}$, the density matrix acquires the off-diagonal elements $a_1 \bar{a}_0$ and $a_0 \bar{a}_1$, and the probability of an event gains the additional term $2 |a_0| |a_1| \sqrt{p_0 p_1} \cos\theta$, where $\theta$ is the angle of the polar representation of the complex number $a_1 \bar{a}_0$ and $p_0$, $p_1$ are the probabilities of the event conditioned on the two basis states. suppose, as an example, that the projector represents the event `` the keyword occurs '' and the density matrix represents the probability distribution of keyword occurrence in useful webpages. the common factor (i.e.
) is the sum of two probabilities: the probability that the webpage is not useful ($|a_0|^2$) multiplied by the probability that the keyword occurs in a useless webpage ($p_0$), plus the probability that the webpage is useful ($|a_1|^2$) multiplied by the probability that the keyword occurs in a useful webpage ($p_1$). the sum is nothing but an application of the law of total probability. the quantity $2 |a_0| |a_1| \sqrt{p_0 p_1} \cos\theta$ is the interference term. as the interference term ranges between $-2 |a_0| |a_1| \sqrt{p_0 p_1}$ and $+2 |a_0| |a_1| \sqrt{p_0 p_1}$, the probability of keyword occurrence computed when usefulness is superposed with uselessness differs from the common factor, in which usefulness and uselessness are mutually exclusive and their probability distribution is described by a mixture. the interference term can be so large that the law of total probability is violated, so that no probability space obeying kolmogorov's axioms can admit the resulting probability values, thus requiring the adoption of a quantum probability space.

in general, the information stored in the data is acquired and delivered through information unit representation and ranking; these processes are described in terms of decision and estimation, and they are therefore affected by error. the error could be eliminated only if precise and exhaustive methodological tools and computer systems were developed. nevertheless, there is a trade-off between precision, exhaustivity and computation cost, because a high level of the former two can be achieved only at a high computation cost. thus, a certain amount of error is unavoidable, yet it can be controlled and kept below a given threshold. either a set of statements, or hypotheses, must be decided upon to best describe the information unit insofar as the data permit us to judge (e.g.
, the best topic(s) to which a webpage is assigned), or the values of certain quantities (also known as parameters) characterizing the information unit must be estimated, and the probability of detection and the probability of false alarm related to a decision must be calculated. in this paper, a great deal of attention is paid to decision, whereas estimation is set apart, not because estimation is unimportant, but because estimation would require another research stream were it to be addressed at the appropriate level of exhaustivity. many tasks in data management are decision problems; examples are the classification of images with respect to predefined patterns, the categorization of webpages into topics, contextual advertising (i.e., the decision whether an ad has to be displayed in a search engine result page), the retrieval and ranking of webpages (i.e., the decision as to whether a webpage has to be put at a given rank of a search engine result page), and probabilistic databases (i.e., the decision about the correct value of an attribute and then the computation of the associated probability). our illustration of decision theory is necessarily brief and confined to its simplest aspects and examples. the illustration is also organized in such a way as to bring out most clearly the parallels between classical probability-based decision and quantum probability-based decision. the examples are chosen from elementary information retrieval or machine learning theory and perhaps provide a basis for comparison with the quantum case. a certain information unit (e.g., a webpage or a store item) is observed in such a way as to obtain numbers (e.g., the pagerank or the number of positive reviews) on the basis of which a decision has to be made about its state.
The numbers observed are, for example, the frequency of a feature in the information unit, the simplest example being the frequency of a keyword in a webpage used for calculating search engine statistical ranking functions. For the sake of clarity, we use the binary frequency and the feature presence/absence case in the paper. The state might be, for example, the relevance of the webpage to the search engine user's interests or the customer's willingness to buy the store item. The use of the term "state" is not coincidental, because the numbers are observed depending upon the density matrix, which is indeed the mathematical notion implementing the state of a system. Thus, quantum probability ascribes the decision about the state of an information unit to testing the hypothesis that a given density matrix has generated the observed numbers. Consider the hypothesis that the state of the system is one density matrix and the alternative hypothesis that the state of the system is another density matrix. The two hypotheses can be labeled the null and the alternative hypothesis, respectively. In data management, the null hypothesis asserts, for example, that a customer does not buy an item or that a webpage shall be irrelevant to the search engine user, whereas the alternative hypothesis asserts that an item shall be bought by a customer or that a webpage shall be relevant to the user. Therefore, the probability that, say, a feature occurs in an item which shall not be bought by a customer, or that a keyword occurs in a webpage which shall be irrelevant to the search engine user, depends on the state (i.e.
, the density matrix). Statistical decision theory is an old topic, and Neyman-Pearson's lemma is by now one of the most important results; it provides a criterion for deciding upon hypotheses instead of the Bayesian approach. The lemma provides the rule to govern the decider's behaviour and decide the true hypothesis without hoping to know whether it is true. Given an information unit and a hypothesis about the unit, such a rule calculates a specified number (e.g., from a feature) and, if the number is greater than a threshold, rejects the hypothesis; otherwise, it accepts it. Such a rule tells nothing about whether, say, the item shall be bought by the customer, but the lemma proves that, if the rule is followed, then, in the long run, the hypothesis shall be accepted at the highest probability of detection (or power) possible when the probability of false alarm (or size) is not higher than a threshold. The set of the pairs given by size and power is the power curve, which is also known as the receiver operating characteristic (ROC) curve. Neyman-Pearson's lemma implies that the set of the observable numbers (e.g., features) can be partitioned into two distinct regions: one region includes all the numbers for which the hypothesis shall be accepted and is termed the acceptance region; the other region includes all the numbers for which the hypothesis shall be rejected and is termed the rejection region. For example, if a keyword is observed from webpages and only presence/absence is recorded, the set of the observable numbers is {0, 1} and each region is one of its possible subsets. The paper reformulates Neyman-Pearson's lemma in terms of subspaces instead of subsets in order to utilize quantum probability. Therefore, the region of acceptance and the region of rejection must be defined in terms of subspaces.
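Before moving to Hilbert spaces, the classical rule just described can be sketched for a single binary feature; the conditional occurrence probabilities are hypothetical. The second half restates the same test in the diagonal density-matrix form used later, where power and size become traces.

```python
import numpy as np

p1, p0 = 0.8, 0.3  # hypothetical: P(feature occurs | H1), P(feature occurs | H0)

def accept_h1(x, threshold=1.0):
    """Neyman-Pearson rule for a binary observation x: accept the alternative
    hypothesis when the likelihood ratio P(x | H1) / P(x | H0) exceeds the
    threshold."""
    lr = (p1 / p0) if x == 1 else (1 - p1) / (1 - p0)
    return lr > threshold

# Power (probability of detection) and size (probability of false alarm),
# obtained by summing the probabilities of the accepted observations.
power = sum((p1 if x else 1 - p1) for x in (0, 1) if accept_h1(x))
size = sum((p0 if x else 1 - p0) for x in (0, 1) if accept_h1(x))

# The same test restated with diagonal (mixed) density matrices: power and
# size are traces of the density times the acceptance-region projector.
mu1, mu0 = np.diag([p1, 1 - p1]), np.diag([p0, 1 - p0])
P_occ = np.diag([1.0, 0.0])  # region "accept when the feature occurs"
power_m = float(np.trace(mu1 @ P_occ))
size_m = float(np.trace(mu0 @ P_occ))
```

With a unit threshold the rule accepts exactly when the feature occurs, so both formulations give the same operating point on the ROC curve.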
In the following, we illustrate the algorithm for calculating the most efficient test in Hilbert spaces. The following result holds: [the:helstrom] Let the density matrices under the two hypotheses be given. The region of acceptance at the highest power at every size is given by the projectors of the spectrum of the difference between the alternative density and the null density scaled by a threshold, whose eigenvalues are positive. See . An optimal projector is a projector which identifies the region of acceptance and the region of rejection according to Theorem [the:helstrom]. We define the _discriminant function_ as the trace of that difference multiplied by a projector. If the discriminant function is positive, the observed event represented by the projector is placed in the region of acceptance.

Suppose that the density matrix corresponds to a mixed, classical probability distribution. The mixed case is the usual method for dealing with uncertainty in data management; even though more than one feature may exist or the feature may not be binary, the number of features or the number of values of a feature is not essential in the paper. Let \mu_1 be such a mixed distribution: \[ \mu_1 = p_1\mathbf{P}_1 + (1-p_1)\mathbf{P}_0 = \left( \begin{array}{cc} p_1 & 0 \\ [5pt] 0 & 1-p_1 \end{array} \right) \] where \[ \mathbf{P}_1 = \left( \begin{array}{cc} 1 & 0 \\ [5pt] 0 & 0 \end{array} \right) \qquad \mathbf{P}_0 = \left( \begin{array}{cc} 0 & 0 \\ [5pt] 0 & 1 \end{array} \right) \] Similarly, \[ \mu_0 = p_0\mathbf{P}_1 + (1-p_0)\mathbf{P}_0 = \left( \begin{array}{cc} p_0 & 0 \\ [5pt] 0 & 1-p_0 \end{array} \right) \] When \mathbf{P}_1 is observed, the power and the size are p_1 and p_0, respectively. In the classical case, \mathbf{P}_0 and \mathbf{P}_1 represent the absence and the presence, respectively, of a feature. Hence, the possible acceptance or rejection regions are the null projector, \mathbf{P}_0, \mathbf{P}_1 and the identity, which correspond respectively to "never accept", "accept when the feature does not occur", "accept when the feature occurs" and "always accept". Thus, the decision on, say, webpage classification, topic categorization or item suggestion can be made upon the occurrence of one or more features, because \mathbf{P}_0 and \mathbf{P}_1 represent "physical" events. Furthermore, the discriminant function in the mixed case is obtained from the same trace expression with the diagonal densities. The power curve can be built as follows. Suppose, as an example, that
a keyword describes webpage content and that the webpage either includes (1) or does not include (0) the keyword. The discriminant function is positive only under the corresponding condition on p_1 and p_0, and this point corresponds to the event represented by \mathbf{P}_1. Let p_1 (p_0) be the probability that the keyword occurs in a relevant (non-relevant) webpage. When the keyword is the unique observed feature, the webpage is presented to the user when the keyword occurs under one condition on the threshold, or when the keyword does not occur under the complementary condition. The power curve includes the corresponding points together with (0,0) and (1,1). The key point is that a mixture is not the unique way to implement the probability distributions. As we illustrate in Section [sec:class-prob-quant], the superposed vectors \[ {|\varphi_1\rangle} = \left( \begin{array}{c} \sqrt{p_1}\\ [5pt] \sqrt{1-p_1} \end{array} \right) \qquad {|\varphi_0\rangle} = \left( \begin{array}{c} \sqrt{p_0}\\ [5pt] \sqrt{1-p_0} \end{array} \right) \] yield the pure densities \[ \mu_1 = \left( \begin{array}{cc} p_1 & \sqrt{p_1(1-p_1)} \\ [5pt] \sqrt{p_1(1-p_1)} & 1-p_1 \end{array} \right) = {|\varphi_1\rangle\langle\varphi_1|} \] \[ \mu_0 = \left( \begin{array}{cc} p_0 & \sqrt{p_0(1-p_0)} \\ [5pt] \sqrt{p_0(1-p_0)} & 1-p_0 \end{array} \right) = {|\varphi_0\rangle\langle\varphi_0|} \] which replace the mixed densities. Theorem [the:helstrom] instructs us to define the optimal projectors as those of the spectrum of the density difference whose eigenvalues are positive, the spectrum being the corresponding eigendecomposition (see ). The quantity involved is the distance between densities defined in ; its complement is the squared cosine of the angle between the subspaces corresponding to the density vectors. The justification for viewing it as a distance comes from the fact that "the angle in a Hilbert space is the only measure between subspaces, up to a constant factor, which is invariant under all unitary transformations, that is, under all possible time evolutions." The optimal projectors in the pure case differ from \mathbf{P}_1 and \mathbf{P}_0, the optimal projectors in the mixed case. The probability of detection (i.e., the power) and the probability of false alarm (i.e.
, the size) in the pure case are defined analogously, as traces of the pure densities with the optimal projectors. Finally, the power can be expressed as a function of the size, so that the power curve is obtained (see ). These expressions have no counterpart in classical probability and are among the essential points of the paper, because they allow us to improve ranking while using the same amount of evidence as that used for the classical probability distributions. At this point, there are three main issues: * the numerical difference between the classical and the quantum probabilities of detection at every given probability of false alarm, * the interpretations of the optimal projectors and whether those interpretations can be tied together, * how the optimal projectors in the pure case can be used for ranking information units in a data management system. The issues are addressed in Sections [sec:optim-rank-quant], [sec:interpr-quant-proj] and [sec:impl-rank], respectively.

The following lemma shows that the power of the decision rule in quantum probability is greater than, or equal to, the power of the decision rule in classical probability with the same amount of information available from the training set for estimation. [sec:optim-rank-quant-2] At every given false alarm probability, the quantum power is at least the classical power. The equality holds only when the traces of the pure densities with the optimal projectors coincide with p_1 and p_0, i.e., when the pure-case and the mixed-case operating points are the same. Proof sketch: let a false alarm probability be fixed and consider the real, continuous functions yielding the detection probabilities at that size. Each admits first and second derivatives in the unit interval and is continuous. Consider the polynomial passing through the points at which one curve intersects the other; the Lagrange interpolation theorem can then be used, the resulting difference being non-negative by the sign of the derivatives. The argument applies throughout the unit interval, hence the claim follows.
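The pure-case computation, and its comparison with the classical curve, can be sketched with a small eigenvalue calculation: build the superposed vectors from the same estimates, project onto the positive eigenspace of the thresholded density difference, and sweep the threshold to trace the quantum power curve. All probability values are hypothetical; the classical curve is the randomized polygon through (0,0), (p0,p1) and (1,1).

```python
import numpy as np

p1, p0 = 0.8, 0.3  # hypothetical estimates, the same data as the classical rule

# Superposed state vectors and the pure densities replacing the mixed ones.
phi1 = np.array([np.sqrt(p1), np.sqrt(1 - p1)])
phi0 = np.array([np.sqrt(p0), np.sqrt(1 - p0)])
mu1, mu0 = np.outer(phi1, phi1), np.outer(phi0, phi0)

def operating_point(lam):
    """Size and power of the optimal test at threshold lam: project onto
    the positive eigenspace of mu1 - lam * mu0 and take traces."""
    w, v = np.linalg.eigh(mu1 - lam * mu0)
    eta = sum(np.outer(v[:, i], v[:, i]) for i in range(2) if w[i] > 0)
    return float(np.trace(mu0 @ eta)), float(np.trace(mu1 @ eta))

def classical_power(alpha):
    """Randomized classical ROC: polygon (0,0)-(p0,p1)-(1,1)."""
    if alpha <= p0:
        return alpha * p1 / p0
    return p1 + (alpha - p0) * (1 - p1) / (1 - p0)

# Sweeping the threshold traces the quantum power curve point by point.
quantum_roc = sorted(operating_point(lam) for lam in np.linspace(0.05, 20, 100))
```

For these numbers the point at a unit threshold is roughly (size, power) of (0.25, 0.75), above the classical chord value of about 0.66 at the same size, in line with the lemma.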
The power can be plotted against the size, thus producing the power curve of the classical decision rule and the power curve of the quantum decision rule. A graphical representation is provided in Figure [fig:roc]. The quantum curve lies above the polygonal curve depicting the classical one; the two curves intersect only at the points corresponding to the observed feature values, and elsewhere the quantum curve is strictly above.

[sec:optim-proj-quant] Suppose that ten information units have been used for training a data management system. Each unit has been indexed using one binary feature and has been marked as useful (1) or useless (0). The training set is summarized by Table [tab:example]. Therefore, the computation of the classical probabilities follows from the relative frequencies in the table, and the computation of the quantum quantities follows from them in turn.

In this section, some interpretations of the optimal projectors representing the region of acceptance are provided. The optimal projectors in the pure case have a more difficult interpretation than those in the mixed case, because the latter represent "physical" observations (e.g., a customer review does exist or does not), whereas the former cannot be expressed in terms of the feature projectors, and we cannot explain them by saying that, for example, they represent the presence and/or the absence of a feature. In quantum theory, the impossibility of expressing one projector as a function of other projectors is termed incompatibility, which is expressed mathematically by non-commutativity. The interpretation of the optimal projectors reflects on the interpretation of what it means that they are "observed" in an information unit; for example, if the information unit is a commercial item suggested to a customer, what does the "observation" of an optimal projector mean? What should we observe from an information unit so that the observation outcome corresponds to the projector? The question is not futile, because the answer(s) would affect the algorithms (e.g.
, automatic indexing) used for representing the informative content of the unit. Specifically, the interpretation of the optimal projectors prescribes what the retrieval algorithm must do when a feature is observed. Whether the interpretation of an optimal projector is implemented at indexing time or at query time, an internal memory representation in terms of data structures is necessary for automatic processing, and the representation needs the observation of physical properties which are then converted into numbers. In the mixed case, the answer is quite straightforward, because the optimal projectors correspond to feature occurrence and separate the units indexed by the feature from those not indexed. In the pure case, the answer is not straightforward at all. If the feature projectors represent feature occurrence, the pure-case optimal projectors cannot be a feature occurrence; they are something new which cannot be described in the conventional way.

(Figure [fig:geometry] indicates the angles between the vectors.) Geometrically, each vector is a superposition of two other independent vectors. Figure [fig:geometry] depicts the way the vectors and the spanned subspaces (i.e., projectors) interact, and shows that the optimal projectors are placed symmetrically "around" the density vectors, so that the probability of error is minimized. The observation of a binary feature places the observer upon one of the feature axes, and there is no way to move upon the optimal projectors directly. Probabilistically, the optimal projectors and the density vectors are related through the trace rule. Logically, the projectors are assertions; thus a parallel can be established with assertions and subsets: an assertion defines the elements of the universe (e.g., an event space) which belong to a subset. The basic difference between subspaces and subsets is that a vector belongs to a subspace if and only if it is spanned by a basis of the subspace. A containment relationship can be established between subspaces such that if a subspace (e.g., a line) includes a point, then every subspace (e.g.
, a plane) containing the line includes the point too. The subspace spanned by a projector can be named accordingly, and the containment relationship between subspaces can be encoded so that membership holds for every vector of the smaller subspace (ch. 5). The paper considers the information units relevant, useful or interesting when they are included in the subspace spanned by the acceptance projector. Suppose that a subspace is given and that a metric is defined on the whole space to which it belongs. See . As a consequence, the acceptance projector maximizes the probability of detection; note that when the vector lies in the subspace, the relevant quantity is the distance between the densities defined above. The result establishes a connection between the geometric, probabilistic and logical interpretations of the projectors, even though they seem different. In the next section, these interpretations are tied together, thus allowing us to look for a criterion for assigning an information unit to the best region as explained.

The problem is to decide whether an information unit represented by a binary feature is considered relevant, useful, interesting, etc. An algorithm implementing such a decision rule should perform as follows. It reads the feature occurrence symbol (i.e., either 0 or 1); it checks whether the feature is included in the region of acceptance. If the feature is not included, the hypothesis of relevance, usefulness, interest, etc. is rejected. Another view of the preceding decision rule is the ranking of the information units. When ranking information units, the system returns the units whose features lead to the highest probability of detection, then those whose features lead to the second highest probability of detection, and so on. When a single binary feature is considered, the ranking ends up placing first the units whose features lead to the highest probability of detection, whereas the other units are not retrieved.
As we point out in Section [sec:interpr-quant-proj], the observation of features corresponding to the feature projectors cannot give any information about the observation of the events corresponding to the optimal projectors, due to the incompatibility between these pairs of events. Thus, we cannot design an algorithm implementing the decision rule so that the observation of a feature can be translated into the observation of the events corresponding to the optimal projectors. A possible approach can be based on the probabilistic interpretation of the optimal projectors. According to such an approach, the probability that the event corresponding to an optimal projector occurs, provided that a feature occurs, can provide a measure of the degree to which the event would have occurred if it could have been observed. When the subspaces represent these events, that probability is computed by the trace rule from the observed-feature state. Such an approach is only partially satisfactory. As an alternative approach, consider the geometric interpretation depicted in Figure [fig:geometry]. Note that the asymmetry of the feature projectors with respect to the densities causes the suboptimality of the classical rule; indeed, if the feature basis coincided with the optimal one, the power and the size would be the same in both cases. We propose a method which reaches optimality without either resorting to probabilistic approximations or incurring high computational costs. The density vectors are a superposition of vectors from both the feature basis and the optimal basis, which are two different bases and induce different coordinates. When the feature basis is used, the coordinates of a density vector are the square roots of the feature probabilities; when the optimal basis is used, the coordinates change accordingly, and, as is often assumed when ranking information units, the coordinates have a quite simple and intuitive meaning. As the asymmetry of the feature projectors with respect to the density vectors is due to the statistics observed from the training set, we leverage those statistics for improving the ranking.
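The probabilistic interpretation discussed above can be operationalized as a scoring rule: score each unit by the probability that the acceptance event occurs given the observed feature value, i.e., the trace of the observation projector with the optimal projector. The probability estimates and the unit names are hypothetical, and this is a sketch of one of the candidate approaches discussed in this section, not the paper's final method.

```python
import numpy as np

p1, p0 = 0.8, 0.3  # hypothetical estimates from a training set

phi1 = np.array([np.sqrt(p1), np.sqrt(1 - p1)])
phi0 = np.array([np.sqrt(p0), np.sqrt(1 - p0)])
mu1, mu0 = np.outer(phi1, phi1), np.outer(phi0, phi0)

# Optimal (pure-case) acceptance projector: positive eigenspace of mu1 - mu0.
w, v = np.linalg.eigh(mu1 - mu0)
eta = sum(np.outer(v[:, i], v[:, i]) for i in range(2) if w[i] > 0)

def score(x):
    """Probability that the acceptance event occurs given the observed
    binary feature value x, i.e. tr(|x><x| eta)."""
    e = np.zeros(2)
    e[0 if x == 1 else 1] = 1.0  # occurrence maps to the first basis vector
    return float(e @ eta @ e)

# Rank hypothetical units (name, observed feature value) by the score.
units = [("u1", 1), ("u2", 0), ("u3", 1)]
ranking = sorted(units, key=lambda u: score(u[1]), reverse=True)
```

Units whose feature occurs receive the higher score here, so they are placed first, while the scores themselves reflect the geometry of the optimal projector rather than the raw feature probabilities.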
In particular, in this paper we show that changing the estimation of the probabilities is sufficient to reach optimality. Then, we ask how we should define the density vectors or matrices so that the new coordinates are obtained instead of the old ones. The basis vectors are rotated, thus changing the coordinates. Therefore, we define the density vectors in the new basis in such a way that, if a feature is observed under the alternative hypothesis, the probability of detection is obtained and, if a feature is observed under the null hypothesis, the probability of false alarm is obtained. The simple solution is defining the new density vectors accordingly. At first sight, the increase of the probability of detection is due to the higher probability values assigned to the region of acceptance in the pure case than those assigned to the region of acceptance in the mixed case, and not to a different ranking. In the following, we show that the superiority of the discriminant function in the pure case is instead due to the different ranking induced by a different partition of the event space into a region of acceptance and a region of rejection. We state the problem as follows: are there parameter values for which the region of acceptance in the pure case differs from that in the mixed case? Consider Theorem [the:helstrom] to answer the question. The region of acceptance in the mixed case is defined through Table [tab:region-of-acceptance-mixed], whereas the region of acceptance in the pure case is defined through Table [tab:region-of-acceptance-pure]. Furthermore, the discriminant function derived from the pure densities has the same form, and the regions of acceptance correspond to the sign of the eigenvalues of the spectrum of the discriminant function as in the mixed case. The equality case is addressed in .
Suppose that the parameters satisfy the stated condition. Then the region of acceptance in the mixed case is represented by one projector, whereas the region of acceptance in the pure case is represented by a different one. The counter-example just mentioned proves the following: [sec:impl-optim-rank] The discriminant function in the pure case ranks information units in a different way from the discriminant function in the mixed case, because an alternative ranking is computed. In the mixed case the densities that are considered are those associated with a mixed state, while in the pure case the densities are the ones associated with a pure state. So the equations look the same; what differs is the type of densities that are used in the two cases. We have shown that the improvement of ranking, measured in terms of probability of detection at a given probability of false alarm, is due to the ranking induced by the pure-case discriminant function. Hence, we restate the problem of finding the optimal projectors as the problem of defining the coordinates of the representation of the density vectors in the new basis. The problem of defining the new coordinates for the density vectors might be viewed as a problem of feature weighting. In such a context, the traditional estimations of the probability of feature occurrence (i.e.
, the maximum likelihood estimates) under the two different hypotheses are replaced by the new values. Feature re-weighting is explored in IR, whose state of the art is given by the BM25 weighting scheme surveyed, for example, in . The main drawback of weighting schemes like BM25 is the parameter tuning necessary for optimizing the effectiveness with a given database or query, thus making the understanding of how and why a scheme is more effective than others rather problematic. In contrast, the paper illustrates the decision rule in such a way that, if the decision rule is followed, then the hypothesis shall be accepted when it is true at a higher probability of detection, when the probability of false alarm is not more than a given threshold. The formulation of the decision rule provided in this section allows us to design an efficient algorithm for indexing and retrieving (or classifying) information units. The algorithm is just an instance of those employed currently in IR (see for example ), where the maximum likelihood or Bayesian estimations of the probabilities are replaced by the new estimates.

The foundations of quantum mechanics and quantum theory have been illustrated in plenty of books, such as and . Quantum probability, for example, has been introduced in .
In particular, the interference term is addressed in . The view of probability illustrated in Section [sec:class-prob-quant] is based on . The utilization of quantum theory in computation, information processing and communication is described in . Recently, investigations have started in other research areas, for example, in IR. The paper is inspired by Helstrom's book, which provides the foundations and the main results in quantum detection; an example of the exploitation of the results in quantum detection within communication theory is reported in . This paper links to those works as far as density matrices and projectors are concerned; however, the paper develops quantum detection for data management. This paper departs from the probability ranking principle (PRP) proposed in the context of classical probability; we propose quantum probability to improve ranking in a principled way. In information retrieval, the probability ranking principle states that "if a reference retrieval system's response to each request is a ranking of the documents in the collection in order of decreasing probability of relevance to the user who submitted the request, where the probabilities are estimated as accurately as possible on the basis of whatever data have been made available to the system for this purpose, the overall effectiveness of the system to its user will be the best that is obtainable on the basis of those data." However, some assumptions undermine the general applicability of the PRP. We state a similar principle yet replace classical probability, which is implied in the PRP, with quantum probability: the parameter estimation data are kept the same, but we instead use subspaces to define alternative regions of acceptance and rejection. To our knowledge, the use of quantum probability for ranking information units has not yet been addressed in the same way as in this paper, although a few papers that are somehow comparable can be found. Perhaps the closest paper is .
That paper proposes to rank documents by quantum probability and suggests that interference (which must be estimated) might model dependencies in relevance judgements, such that the documents ranked up to a given position interfere with the degree of relevance of the document ranked at the next position; this means that the optimal order of documents under the PRP differs from that of the quantum PRP. Note that they empirically show that quantum probability is more effective than classical probability in specific ranking tasks. In contrast, in this paper we do not need to address interference, because quantum probability can be estimated using the same data used to estimate classical probability. We rather show that not only does ranking by quantum probability provide a different optimal ranking, it is also more effective than ranking by classical probability. In this regard, the effectiveness of quantum probability measured in that work stems from the estimation of classical probability and that of interference, but the regions of acceptance and rejection are still based on sets. It follows that the optimality of the quantum PRP strongly depends on the optimality of the PRP and on the effectiveness of the interference estimation. In this paper, on the contrary, ranking optimality only depends on the region of acceptance defined upon subspaces. Another paper somewhat related to ours is . The authors discuss how to employ quantum formalisms for encompassing various information retrieval tasks within a single framework. From an experimental point of view, what that paper demonstrates is that ranking functions based on the quantum formalism are computationally feasible. The best experimental results of rankings driven by the quantum formalism are comparable to BM25, that is, to the PRP, thus limiting the contribution within a classical probability framework. Probabilistic database systems manage imprecise data and provide tools for structured complex queries. A survey is provided in .
Besides scalability and query plan execution, these systems perform probabilistic inference, which may be defined upon classical or quantum probability, and they concentrate on top-_k_ query answering, where the tuples are assigned a probability distribution. The results of this paper may be applied to probabilistic database systems too.

The main result of the paper is the demonstration that quantum probability can be incorporated into a data management system for ranking information units. As ranking by quantum probability has proved more effective than ranking by classical probability when used in other domains, it is our belief that an analogous improvement can be achieved within data management. The future developments are threefold. First, we will work on the interpretation of the optimal projectors in the pure case, because their detection in an information unit may open further insights. Second, classical feature correlation and quantum entanglement will be investigated. Third, evaluation is crucial to understand whether the results of the paper can be confirmed by experiments.

L. Accardi. On the probabilistic roots of the quantum mechanical paradoxes. In S. Diner and L. de Broglie, editors, _The Wave-Particle Dualism_, pages 297-330. D. Reidel Pub. Co., 1984.
L. Accardi. Il Saggiatore, 1997. In Italian.
L. Accardi and A. Fedullo. On the statistical meaning of complex numbers in quantum mechanics. 34(7):161-172, June 1982.
G. Boole. Walton and Maberly, 1854.
P. Bruza, D. Sofge, W.F. Lawless, C.J. van Rijsbergen, and M. Klusch, editors. Volume 5494 of _Lecture Notes in Computer Science_, Saarbrücken, Germany, 2009. Springer.
G. Cariolaro and G. Pierobon. Performance of quantum data transmission systems in the presence of thermal noise. 58:623-630, February 2010.
W.B. Croft, D. Metzler, and T. Strohman. Addison Wesley, 2009.
N. Dalvi, C. Ré, and D. Suciu. Probabilistic databases: diamonds in the dirt.
, 52:86-94, July 2009.
R.B. Griffiths. Cambridge University Press, 2002.
Undergraduate Texts in Mathematics. Springer, 1987.
C.W. Helstrom. Academic Press, 1976.
Harvard University Press, 1989.
A.N. Kolmogorov. Chelsea Publishing Company, New York, second edition, 1956.
M. Melucci. A basis for information retrieval in context. 26(3), 2008.
J. Neyman and E.S. Pearson. On the problem of the most efficient tests of statistical hypotheses. 231:289-337, 1933.
M. Nielsen and I.L. Chuang. Cambridge University Press, 2000.
K.R. Parthasarathy. Birkhäuser, 1992.
B. Piwowarski, I. Frommholz, M. Lalmas, and K. van Rijsbergen. What can quantum theory bring to information retrieval? In _Proc. 19th International Conference on Information and Knowledge Management_, pages 59-68, 2010.
E. Rieffel. Certainty and uncertainty in quantum information processing. In _Proceedings of the Quantum Interaction Symposium_, 2007.
S.E. Robertson. The probability ranking principle in information retrieval. 33(4):294-304, 1977.
S. Robertson and H. Zaragoza. The probabilistic relevance framework: BM25 and beyond. 3(4):333-389, 2009.
C.J. Keith van Rijsbergen. Cambridge University Press, UK, 2004.
J. von Neumann. Princeton University Press, 1955.
W.K. Wootters. Statistical distance and Hilbert space. 23(2):357-362, January 1981.
G. Zuccon, L. Azzopardi, and C.J. van Rijsbergen. The quantum probability ranking principle for information retrieval. In _Proceedings of the International Conference on the Theory of Information Retrieval (ICTIR)_, pages 232-240, 2009.

A complex vector is represented as a "ket". The conjugate transpose of a ket is represented as a "bra" (therefore, the Dirac notation is called the bra(c)ket notation). Moreover, the properties of the trace allow us to rewrite the corresponding expressions accordingly. In general, if the densities are trace-1 Hermitian operators and a projector is given, the trace of the density multiplied by the projector is the probability that the event represented by the projector occurs given a density operator.
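The bra(c)ket notation and the trace rule of the appendix can be sketched in a few lines; the particular state vector is hypothetical and chosen only so that the resulting probability is easy to check by hand.

```python
import numpy as np

# A "ket" is a column vector; the "bra" is its conjugate transpose.
ket = np.array([[0.6], [0.8j]])  # hypothetical normalized state |psi>
bra = ket.conj().T               # <psi|
rho = ket @ bra                  # density operator |psi><psi|, trace 1

# The trace rule: tr(rho P) is the probability of the event represented
# by the projector P given the density operator rho.
P = np.diag([1.0, 0.0])          # projector onto the first basis state
prob = float(np.real(np.trace(rho @ P)))
```

Here the probability equals the squared modulus of the first amplitude, 0.36, as the trace rule requires for a pure state.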
Data management systems, like database, information extraction, information retrieval or learning systems, store, organize, index, retrieve and rank information units, such as tuples, objects, documents and items, to match a pattern (e.g., classes and profiles) or meet a requirement (e.g., relevance, usefulness and utility). To this end, these systems rank information units by probability to decide whether an information unit matches a pattern or meets a requirement. Classical probability theory represents events as sets and probabilities as set measures; thus, the distributive and total probability laws are admitted. Quantum probability is a non-classical theory which admits neither the distributive nor the total probability law. Although ranking by probability is far from being perfect, it is optimal thanks to statistical decision theory and parameter tuning. The main question asked in the paper is whether further improvement over the optimality provided by probability may be obtained if classical probability theory is replaced by quantum probability theory. Whereas classical probability (and detection theory) is based on sets, such that the regions of acceptance/rejection are set-based detectors, quantum probability is based on subspace-based detectors. The paper shows that ranking information units by quantum probability differs from ranking them by classical probability given the same data used for parameter estimation. As the probability of detection (also known as recall or power) and the probability of false alarm (also known as fallout or size) measure the quality of ranking, we point out and show that ranking by quantum probability yields a higher probability of detection than ranking by classical probability for a given probability of false alarm and the same parameter estimation data.
As quantum probability has provided more effective detectors than classical probability within domains other than data management, we conjecture that a system implementing subspace-based detectors shall be more effective than a system implementing set-based detectors, the effectiveness being calculated as expected recall estimated over the probability of detection and expected fallout estimated over the probability of false alarm.
The benefits of cooperative work were explored by nature well before the advent of the human species, as attested by the collective structures built by slime molds and social insects. In the human context, the socio-cognitive niche hypothesis purports that hominin evolution relies so heavily on social elements that a band of hunter-gatherers can be viewed as a 'group-level predator'. Whether physical or social, those structures and organizations may be thought of as the organisms' solutions to the problems that endanger their existence (see, e.g., ) and have motivated the introduction of the concept of social intelligence in the scientific milieu. In computer science circles, that concept prompted the proposal of optimization heuristics based on social interaction, such as the popular particle swarm optimization algorithm and the perhaps lesser-known adaptive culture heuristic. Despite the prevalence of the notion that cooperation can aid a group of agents to solve problems more efficiently than if those agents worked in isolation, and despite the success of the social interaction heuristics in producing near-optimal solutions to a variety of combinatorial optimization problems, we know little about the conditions that make cooperative work more efficient than independent work. In particular, we note that since cooperation (or communication, in general) reduces the diversity or heterogeneity of the group of agents, it may actually end up reducing the efficiency of group work. Efficiency here means that the time to solve a problem scales superlinearly with the number of individuals or resources employed in the task.
in this contribution we study the performance of a group of cooperative agents following a first - principles research strategy for the study of cooperative problem solving set forth by huberman in the 1990s , which consists of tackling easy tasks , endowing the agents with simple random trial - and - test search tools , and using plain protocols of collaboration . here the task is to find the global maxima of three realizations of the nk - fitness landscape characterized by different degrees of ruggedness ( i.e. , values of the parameter ) . we use a group of m agents which , in addition to the ability to perform individual trial - and - test searches , can imitate a model agent ( the best performing agent at the current trial ) with a probability . hence our model exhibits the two critical ingredients of a collective brain according to bloom : imitative learning and a dynamic hierarchy among the agents . the model also exhibits the key feature of distributed cooperative problem solving systems , namely , the exchange of messages between agents informing each other of their partial success ( i.e. , their fitness at the current trial ) towards the completion of the goal . we find that the presence of local maxima in the fitness landscape introduces a complex trade - off between the computational cost to solve the task and the group size . in particular , for a fixed frequency of imitation , there is an optimal value of at which the computational cost is minimized . this finding leads to the conjecture that the efficacy of imitative learning could be a determinant of the group size of social animals . our study departs from the vast literature on cooperation that followed robert axelrod s 1984 seminal book the evolution of cooperation , since in that game - theoretical approach it is usually assumed a priori that mutual cooperation is the most rewarding strategy for the individuals .
on the other hand , here we consider a problem solving scenario and a specific cooperation mechanism ( imitation ) aiming at determining under which conditions cooperation is more efficient than independent work . the rest of the paper is organized as follows . in section [ sec : nk ] we offer a brief overview of the nk model of rugged fitness landscapes . in section [ sec : imi ] we present a detailed account of the imitative learning search strategy , and in section [ sec : res ] we study its performance on the task of finding the global maxima of three realizations of the nk landscape with and ruggedness parameters , and . the rather small size of the solution space ( binary strings of length ) allows the full exploration of the space of parameters and , in particular , the study of the regime where the group size is much greater than the solution space size . finally , section [ sec : conc ] is reserved for our concluding remarks . the nk model of rugged fitness landscapes introduced by kauffman offers the ideal framework to test the potential of imitative learning in solving optimization problems of varied degrees of difficulty , since the ruggedness of the landscape can be tuned by changing the two integer parameters and of the model . roughly speaking , the parameter determines the size of the solution space whereas the value of influences the number of local maxima and minima on the landscape . the solution space consists of the distinct binary strings of length , with . to each string we associate a fitness value which is an average of the contributions from each component in the string , i.e. , where is the contribution of component to the fitness of string . it is assumed that depends on the state as well as on the states of the right neighbors of , i.e. , with the arithmetic in the subscripts done modulo . in addition , the functions are distinct real - valued functions on .
as usual , we assign to each a uniformly distributed random number in the unit interval . hence has a unique global maximum . for there are no local maxima , and the sole maximum of is easily located by picking for each component the state if or the state otherwise . for , finding the global maximum of the nk model is a np - complete problem , which essentially means that the time required to solve the problem using any currently known deterministic algorithm increases exponentially fast as the size of the problem grows . the increase of the parameter from to decreases the correlation between the fitness of neighboring configurations ( i.e. , configurations that differ at a single component ) in the solution space , and for those fitness values are uncorrelated , so the nk model reduces to the random energy model . ( figure caption , fig . [ fig:1 ] : and ( _ circles _ ) , ( _ triangles _ ) and ( _ inverted triangles _ ) . the lines are guides to the eye . ) to illustrate the effect of varying on the ruggedness of the fitness landscape , in fig . [ fig:1 ] we show the relative fitness of a string as function of its hamming distance to the global maximum for and different values of . here stands for the fitness of the global maximum . for each , the figure shows the results for a single realization of the fitness landscape and for a single trajectory in the solution space that begins at the maximum ( ) and changes the state components sequentially until all states are reversed ( ) . we note that the ruggedness of the landscapes ( essentially , the number of local maxima ) can vary wildly between landscapes characterized by the same values of and , and so can the performance of any search heuristic based on the local correlations of the fitness landscape .
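the fitness computation described above can be sketched in code as follows ( a minimal illustration with hypothetical names such as `make_nk_landscape` and `nk_fitness` ; each component contribution depends on the bit itself and its right neighbors , with periodic boundary conditions , and is drawn uniformly in the unit interval ) :

```python
import random

def make_nk_landscape(n, k, seed=0):
    """Draw the random component contributions that define one realization
    of an NK landscape: f[i][local] is the contribution of component i
    when the k+1 bits (x_i, ..., x_{i+k}) (indices mod n) spell `local`."""
    rng = random.Random(seed)
    return [[rng.random() for _ in range(2 ** (k + 1))] for _ in range(n)]

def nk_fitness(x, f):
    """Fitness of the binary string x: the average of the n component
    contributions, each depending on the bit and its k right neighbors."""
    n = len(x)
    k1 = len(f[0]).bit_length() - 1  # k + 1 bits per neighborhood
    total = 0.0
    for i in range(n):
        local = 0
        for j in range(k1):
            local = (local << 1) | x[(i + j) % n]
        total += f[i][local]
    return total / n
```

for k = 0 each contribution depends only on its own bit , so the global maximum is found by optimizing each bit independently , as noted above .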
hence , in order to appreciate the role of the parameters that are relevant to imitative learning , namely the group size and the imitation probability , for fixed and we consider a single realization of the nk fitness landscape . we assume a group or system composed of agents . each agent starts with an initial binary string drawn at random with equal probability for the digits and . in the typical situation that the size of the solution space is much greater than the group size we can consider those initial strings as distinct binary strings , but here we will consider the case that as well , so that many copies of the same string are likely to appear in this initial stage of the simulation . in addition , we assume that the agents operate in parallel . at any trial , each agent can choose with a certain probability between two distinct processes to operate on the strings . the first process , which happens with probability , is the elementary or minimal move in the solution space , which consists of picking a string bit at random with equal probability and then flipping it . the repeated application of this operation is capable of generating all the binary strings starting from any arbitrary string . the second process , which happens with probability , is the imitation of a model string . we choose the model string as the highest - fitness string in the group at trial . the string to be updated ( the target string ) is compared with the model string and the differing digits are singled out . then the agent selects at random one of the distinct bits and flips it , so that the target string becomes more similar to the model string . the parameter is shown in fig . [ new:2 ] for different string lengths . ( recall that at this stage . ) in particular , for we find that implies .
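the trial dynamics just described admits a compact sketch ( hypothetical code , not the authors implementation ; `fitness` is any callable , e.g. the nk fitness , and the model string is recomputed at every trial ) :

```python
import random

def imitative_step(strings, fitness, p, rng):
    """One trial of the imitative search: each agent either performs the
    elementary move (flip one random bit, prob. 1 - p) or imitates the
    model (copy one of the bits where it differs from the current best
    string, prob. p). An agent identical to the model falls back to the
    elementary move, so for p = 1 only the model explores."""
    model = max(strings, key=fitness)[:]  # best performer at this trial
    for s in strings:
        diff = [i for i in range(len(s)) if s[i] != model[i]]
        if rng.random() < p and diff:
            s[diff[rng.randrange(len(diff))]] ^= 1  # one step closer to the model
        else:
            s[rng.randrange(len(s))] ^= 1           # elementary move
```

running this step repeatedly until some agent hits the global maximum , and recording the product of the group size and the number of trials , reproduces the computational cost measure used throughout the paper .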
for very large , the probability of success becomes . we note that although eq . ( [ geom1 ] ) is valid strictly for large only , the fact that is typically on the order of makes this geometric distribution an exceedingly good approximation to the correct distribution of absorbing times . now let us consider the case of agents searching independently for the global maximum . since the process halts when one of the agents finds the global maximum , the halting time is , where and are independent random variables distributed by the geometric distribution ( [ geom1 ] ) . it is easy to show that the distribution of is also geometric with probability of success . in the case of agents the halting time is , and since both and are geometrically distributed , though with distinct success probabilities , we find that is also geometrically distributed with probability of success . the generalization of this reasoning to agents yields that the mean scaled cost is . since is close to we can write , so that for and for . we recall that for we have . as expected , eq . ( [ cind ] ) matches the simulation data very well ( see fig . [ fig:2 ] ) . ( figure caption , fig . [ fig:2 ] : as function of the group size for the imitation probability ( _ circles _ ) , ( _ triangles _ ) , ( _ inverted triangles _ ) , and ( _ squares _ ) . the solid curve is eq . ( [ cind ] ) and the dashed line is the linear function . the parameters of the nk landscape are and . the landscape exhibits a single maximum . ) the nk fitness landscape with exhibits a single global maximum and no local maxima . the results of the performance of the imitative search for a landscape with and are summarized in fig . [ fig:2 ] . as shown in the previous subsection , the mean computational cost for non - interacting agents ( ) is a constant provided the group size is not too large compared to , which is close to the size of the solution space , .
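the geometric - distribution argument for independent agents can be checked directly ( a sketch with illustrative parameters ; the function names are ours ) :

```python
def p_min_geometric(ps):
    """Success probability of min(t_1, ..., t_M) for independent geometric
    halting times t_i: the minimum exceeds t iff every t_i does, so
    P(min > t) = prod_i (1 - p_i)^t, i.e. the minimum is geometric with
    success probability 1 - prod_i (1 - p_i)."""
    q = 1.0
    for p in ps:
        q *= 1.0 - p
    return 1.0 - q

def mean_scaled_cost(m, p):
    """Mean computational cost (group size times mean halting time) for
    m independent agents with per-trial success probability p."""
    return m / p_min_geometric([p] * m)
```

for group sizes small relative to the inverse success probability the cost is nearly constant ( linear speedup ) , while for very large groups it grows linearly with the group size , in agreement with the two regimes discussed in the text .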
in the case where the group size is very large , the agents begin to duplicate each other s work , leading to a linear increase of with increasing . more pointedly , in this regime we find ( see the straight line in fig . [ fig:2 ] ) . we stress that a constant computational cost means that the time the group requires to find the global maximum decreases with the inverse of the group size ( i.e. , the time to solve the problem decreases linearly with the number of agents ) , whereas a computational cost that grows linearly with the group size means that does not change as more agents are added to the group . allowing the agents to imitate the best performer ( the model ) at each trial leads to a rapid reduction of the computational cost provided the group size remains small . the best performance is achieved for and , and corresponds to a fiftyfold decrease of the computational cost with respect to the independent search ( ) . the fact that the best performance is obtained when the imitation probability is maximum is due to the absence of local maxima in the landscape for . we recall that for only the model string , which is likely to be represented by several copies in the group , can perform the elementary move ; all other strings must imitate the model . as a result , the effective size of the search space is greatly reduced , i.e. , the strings are concentrated in the vicinity of the model string , which can not accommodate many more than strings without duplication of work . this is the reason we observe a degradation of the performance when the group size increases beyond its optimal value . note that for the imitative learning search always performs better than the independent search . now we consider a somewhat more complex nk fitness landscape by setting and .
in the particular realization of the landscape studied here there are 5 maxima in total , among which 4 are local maxima . the relative fitness of those maxima ( ) , as well as their hamming distances to the global maximum ( ) , is presented in table [ table : k2 ] . ( table [ table : k2 ] caption : local maxima for the studied realization of the nk fitness landscape with and . ) the results of the imitative learning search are summarized in fig . [ fig:3 ] , where the mean rescaled computational cost is plotted against the group size for different values of the imitation probability . the performance of the independent search ( ) is identical to that shown in fig . [ fig:2 ] for , since that search strategy is not affected by the complexity of the landscape . the results for the cooperative search ( ) , however , reveal a trade - off between the group size and the imitation probability . in particular , for we observe a steep increase of the computational cost for intermediate values of . this happens because the group can be trapped near one of the local maxima . for large groups ( in this case ) , chances are that some of the initial strings are close to the global maximum and end up attracting the rest of the group to its neighborhood , thus attenuating the effect of the local maxima . the robust finding is that for any there is an optimal group size , which depends on the value of , that minimizes the computational cost of the search .
( figure caption , fig . [ fig:3 ] : as function of the group size for the imitation probability ( _ triangles _ ) , ( _ inverted triangles _ ) , ( _ squares _ ) and ( _ circles _ ) . the solid line is the linear function . the parameters of the nk landscape are and . the landscape exhibits 4 local maxima and a single global maximum . ) ( figure caption , fig . [ fig:4 ] : relative fitness of the model string for agents and probability of imitation . the thin horizontal lines indicate the relative fitness of the 4 local maxima given in table [ table : k2 ] . the parameters of the nk landscape are and . ) a better understanding of the dynamics of the search is offered in fig . [ fig:4 ] , which shows the relative fitness of the model string as function of the number of trials for and . the role of the local maxima as transitory attractors of the search is evident in this figure . a typical run with agents , which is approximately the location of the maximum of in fig . [ fig:3 ] , yields essentially a flat line at the highest - fitness local maximum ( maximum 4 in table [ table : k2 ] ) and a sudden jump to the global maximum . because there are many copies of the model string ( the mean hamming distance between the strings is less than 1 ) , the variants produced by the elementary move are attracted back to the local maximum . this is the reason why a small group can search the solution space much more efficiently than a large one in the case where imitation is very frequent . the particular realization of the nk fitness landscape with and that we consider now has 53 maxima , including the global maximum , which poses a substantial challenge to any hill - climbing type of search strategy . as in the previous case , we observe in fig . [ fig:5 ] a trade - off between and , but now the results reveal how imitative learning may produce disastrous performances on rugged landscapes for certain ranges of those parameters .
the strategy of following or imitating a model string can backfire if the fitness landscape exhibits high - fitness local maxima that are distant from the global maximum . a large group may never escape from those maxima due to the attractive effect of the clones of the model string . this is what we observed in the case of , for which we were unable to find the global maximum with groups of size . ( figure caption , fig . [ fig:5 ] : as function of the group size for the imitation probability ( _ circles _ ) , ( _ squares _ ) , ( _ triangles _ ) and ( _ inverted triangles _ ) . for we find for . the solid line is the linear function . the parameters of the nk landscape are and . the landscape exhibits 52 local maxima and a single global maximum . ) ( figure caption , fig . [ fig:6 ] : as function of the probability of imitation for the group size ( solid line ) , ( dashed line ) and ( dotted line ) . the parameters of the nk landscape are and . the landscape exhibits 52 local maxima and a single global maximum . ) we note , however , that for a fixed group size it is always possible to tune the imitation probability so that the imitative learning strategy performs better than ( or , in a worst - case scenario , equal to ) the independent search . this point is illustrated in fig . [ fig:6 ] , which shows the computational cost as function of . for any fixed value of , the computational cost exhibits a well - defined minimum that determines the value of the optimal imitation frequency . as hinted in the previous figures , this optimal value decreases with increasing group size .
in order to verify the robustness of our findings , which were obtained for specific realizations of the nk fitness landscape , we have considered four random realizations of the landscape with and , in addition to the realization studied above . the comparison between the mean computational costs to find the global maxima of the five realizations of the landscape is shown in fig . [ fig:8 ] for the imitation probability . the results are qualitatively the same , despite the wild fluctuations of in the regime where the search is trapped in the local maxima . it is reassuring to note that the initial decrease of the mean cost with increasing group size and the existence of an optimal group size that minimizes that cost , which are exhibited by all five realizations of the nk landscapes shown in fig . [ fig:8 ] , are robust properties of the imitative learning search . ( figure caption , fig . [ fig:8 ] : for the probability of imitation as function of the group size for five realizations ( different symbols ) of the nk fitness landscape with and . the solid line is the linear function . ) in this paper we study quantitatively the problem solving performance of a group of agents capable of learning by imitation . the performance measure we consider is the computational cost to find the global maximum of three specific realizations of the nk fitness landscape . the computational cost is defined as the product of the number of agents in the group and the number of attempted trials until some agent finds the global maximum . our main conclusion , namely , that for a fixed probability of imitation there is a value of the group size that minimizes the computational cost , corroborates the findings of a similar study in which the task was to solve a particular cryptarithmetic problem .
hence our conjecture that the efficacy of imitative learning could be a determinant of the group size of social animals ( see for a discussion of the standard selective pressures on group size in nature ) . we note that in the case where the connectivity between agents is variable , i.e. , each agent interacts with distinct randomly picked agents ( here we have focused on the fully connected network only ) , there is an optimal connectivity value that minimizes the computational cost . it would be most interesting to understand how the network topology influences the performance of the group of imitative agents and how the optimal network topology correlates with the known animal social networks . the main aim of our contribution is to show that the existence of an optimal group size that maximizes performance for imitative learning is insensitive to the choice of the fitness landscape , so it is likely a robust property of populations that use imitation as a cooperative strategy . although we have focused on the effect of the parameter , which roughly determines the ruggedness of the nk landscape , the parameter ( the length of the strings ) also plays a relevant role in the search for the global maximum , besides the obvious role of fixing the size of the search space . ( of course , since the typical time to find the global maximum scales with , even moderate values of , say , would render the simulations unfeasible .
) the nontrivial role is that the value of seems to pose an upper bound on the optimal size of the group in the regime where imitation is very frequent ( see figs . [ fig:2 ] , [ fig:3 ] and [ fig:5 ] ) . this is so because in that regime the strings are concentrated in the close vicinity of the model string , which can not accommodate more than different strings . we do not purport to offer here any novel method to explore rugged landscapes efficiently , but the finding that for small group sizes imitative learning decreases considerably the computational cost of the search , even in a very rugged landscape ( see the data for in fig . [ fig:5 ] ) , motivates us to address the question of whether in such landscapes that strategy could achieve a better - than - random performance for all group sizes . this is achieved automatically for smooth landscapes ( see figs . [ fig:2 ] and [ fig:3 ] ) but not for rugged ones ( see fig . [ fig:5 ] and ) . clearly , the way to accomplish this aim is to decrease the probability of imitation as the group size increases , following the location of the minima shown in fig . [ fig:6 ] . it is interesting to note that the finding that too frequent interactions between agents may harm the performance of the group ( see fig . [ fig:6 ] ) may offer a theoretical justification for henry ford s factory design , in which the communication between workers was minimized in order to maintain the established efficiency and maximal productivity . the idea that a group of cooperating agents can solve problems more efficiently than when those agents work independently is hardly controversial , despite our ignorance of the conditions that make cooperation a successful problem solving strategy . here we investigate the performance of a group of agents in locating the global maxima of nk fitness landscapes with varying degrees of ruggedness .
cooperation is taken into account through imitative learning and the broadcasting of messages informing on the fitness of each agent . we find a trade - off between the group size and the frequency of imitation : for rugged landscapes , too much imitation or too large a group yields a performance poorer than that of independent agents . by decreasing the diversity of the group , imitative learning may lead to duplication of work and hence to a decrease of its effective size . however , when the parameters are set to optimal values , the cooperative group substantially outperforms the independent agents .
in a recent paper , onatski , moreira and hallin ( 2011 ) ( hereafter omh ) analyze the asymptotic power of statistical tests in the detection of a signal in spherical real - valued gaussian data as the dimensionality of the data and the number of observations diverge to infinity at the same rate .this paper generalizes omh s alternative of a single symmetry - breaking direction ( _ single - spiked _ alternative ) to the alternative of multiple symmetry - breaking directions ( _ multispiked _ alternative ) , which is more relevant for applied work .contemporary tests of sphericity in a high - dimensional environment ( see ledoit and wolf ( 2002 ) , srivastava ( 2005 ) , schott ( 2006 ) , bai et al . (2009 ) , chen et al .( 2010 ) , and cai and ma ( 2012 ) ) consider general alternatives to the null of sphericity .our interest in alternatives with only a few contaminating signals stems from the fact that in many applications , such as speech recognition , macroeconomics , finance , wireless communication , genetics , physics of mixture , and statistical learning , a few latent variables typically explain a large portion of the variation in high - dimensional data ( see baik and silverstein ( 2006 ) for references ) . 
as a possible explanation of this fact , johnstone ( 2001 ) introduces the spiked covariance model , where all eigenvalues of the population covariance matrix of high - dimensional data are equal except for a small fixed number of distinct spike eigenvalues . the alternative to the null of sphericity considered in this paper coincides with johnstone s model . the extension from the single - spiked alternatives of omh to the multispiked alternatives considered here , however , is anything but straightforward . the difficulty arises because the extension of the main technical tool in omh ( lemma 2 ) , which analyzes high - dimensional spherical integrals , to integrals over high - dimensional real stiefel manifolds obtained in onatski ( 2012 ) is not easily amenable to the laplace approximation method used in omh . therefore , in this paper , we develop a completely different technique , inspired by the large deviation analysis of spherical integrals by guionnet and maida ( 2005 ) . let us describe the setting and main results in more detail . suppose that the data consist of independent observations of a -dimensional gaussian vector with mean zero and positive definite covariance matrix . let , where is the -dimensional identity matrix , is a scalar , an diagonal matrix with elements along the diagonal , and a -dimensional parameter normalized so that . we are interested in the asymptotic power of tests of the null hypothesis against the alternative for some , based on the eigenvalues of the sample covariance matrix of the data when , so that with , an asymptotic regime which we abbreviate into . the matrix is an unspecified nuisance parameter , the columns of which indicate the directions of the perturbations of sphericity . we consider the cases of both specified and unspecified . for the sake of simplicity , in the rest of this introduction we only discuss the case of specified , although the case of unspecified is more realistic .
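johnstone s spiked model , as used in this setting , is easy to simulate ( a hypothetical sketch with illustrative parameter values , not the authors code ; the directions are drawn at random , playing the role of the unspecified nuisance parameter ) :

```python
import numpy as np

def spiked_sample_eigs(p, n, hs, seed=0):
    """Sample eigenvalues of (1/n) X X' where the n columns of the p x n
    data matrix X are N(0, Sigma) with Sigma = I_p + V diag(hs) V' for a
    random orthonormal p x r matrix V (r = len(hs), spikes hs >= 0)."""
    rng = np.random.default_rng(seed)
    v, _ = np.linalg.qr(rng.standard_normal((p, len(hs))))
    # Sigma^{1/2} = I + V (sqrt(1 + h) - 1) V' because V is orthonormal
    root = np.eye(p) + v @ np.diag(np.sqrt(1.0 + np.asarray(hs)) - 1.0) @ v.T
    x = root @ rng.standard_normal((p, n))
    return np.sort(np.linalg.eigvalsh(x @ x.T / n))[::-1]
```

under the null ( all spikes zero ) the sample spectrum follows the marchenko - pastur law , whereas a sufficiently large spike detaches the top sample eigenvalue from the bulk .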
denoting by the -th largest sample covariance eigenvalue , let , where . we begin our analysis with a study of the asymptotic properties of the likelihood ratio process , which converges weakly to a gaussian process with autocovariance function . that convergence entails the weak convergence , in the le cam sense , of the -indexed statistical experiments under which the eigenvalues are observed , i.e. the statistical experiments with log - likelihood process . this yields the asymptotic power envelope , where is the standard normal distribution function and . as we explain in the paper , this asymptotic power envelope is valid not only for the -based tests , but also for all tests that are invariant under left orthogonal transformations of the data . next , we consider previously proposed tests of sphericity and of the equality of the population covariance matrix to a given matrix . we focus on the tests studied in ledoit and wolf ( 2002 ) , bai et al . ( 2009 ) , and cai and ma ( 2012 ) . we find that , in general , the asymptotic powers of those tests are substantially lower than the corresponding asymptotic power envelope value . in contrast , our computations for the case show that the asymptotic powers of the - and -based likelihood ratio tests are close to the power envelope . the rest of the paper is organized as follows . section 2 establishes the weak convergence of the log - likelihood ratio process to a gaussian process . section 3 provides an analysis of the asymptotic powers of various sphericity tests , derives the asymptotic power envelope , and proves its validity for general invariant tests . section 4 concludes . all proofs are given in the appendix . let be a matrix with independent gaussian columns . let be the ordered eigenvalues of and write , where . similarly , let and .
as explained in the introduction , our goal is to study the asymptotic power , as , of the eigenvalue - based tests of against for some , where are the diagonal elements of the diagonal matrix . if is specified , the model is invariant with respect to left and right orthogonal transformations ; sufficiency and invariance arguments ( see appendix 5.4 for details ) lead to considering tests based on only . if is unspecified , the model is invariant with respect to left and right orthogonal transformations and multiplications by non - zero scalars ; sufficiency and invariance arguments ( see appendix 5.4 ) lead to considering tests based on only . note that the distribution of does not depend on , whereas , if is specified , we can always normalize by dividing it by . therefore , we henceforth assume without loss of generality that . let us denote the joint density of at as , and that of at as . we have , where depends only on and ; ; is a diagonal matrix ; is the set of all orthogonal matrices ; and is the invariant measure on the orthogonal group , normalized to make the total measure unity . formula ( [ common complex1 ] ) is a special case of the density given in james ( 1964 , p.483 ) for , and follows from theorems 2 and 6 in uhlig ( 1994 ) for . let and let . note that the jacobian of the coordinate change from to is . changing variables in ( [ common complex1 ] ) and integrating out , we obtain , where is a diagonal matrix . consider the likelihood ratios and . formulae ( [ common complex1 ] ) and ( [ common complex2 ] ) imply the following proposition . [ proposition1 ] let be the set of all orthogonal matrices . denote by the invariant measure on the orthogonal group , normalized to make the total measure unity . put and let be the diagonal matrix , where .
then , . in the special case where the rank of the matrix equals one , the integrals over the orthogonal group in ( [ lr1 ] ) and ( [ lr2 ] ) can be rewritten as integrals over a -dimensional sphere . omh show how such spherical integrals can be represented in the form of contour integrals , and apply laplace approximation to these contour integrals to establish the asymptotic properties of and . in the case , the integrals in ( [ lr1 ] ) and ( [ lr2 ] ) can be rewritten as integrals over a stiefel manifold , the set of all orthonormal -frames in . onatski ( 2012 ) obtains a generalization of the contour integral representation from spherical integrals to integrals over stiefel manifolds . unfortunately , the laplace approximation method does not straightforwardly extend to that generalization , and we therefore propose an alternative method of analysis . the second - order asymptotic behavior , as goes to infinity , of integrals of the form was analyzed in guionnet and maida ( 2005 ) ( theorem 3 ) for the particular case where is a fixed matrix of rank one , is a deterministic matrix , and under the condition that the empirical distribution of s eigenvalues converges to a distribution function with bounded support . below , we extend guionnet and maida s approach to cases where has rank larger than one , and to the stochastic setting of this paper . we then use such an extension to derive the asymptotic properties of and . let be the empirical distribution of , and denote by the marchenko - pastur distribution function , with density and , and a mass of at zero . as is well known , the difference between and weakly converges to zero a.s .
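since the displayed density did not survive typesetting , we record here a standard parametrization of the marchenko - pastur law ( a textbook formula , not taken from the present paper : with ratio parameter c = lim p / n , the continuous part is supported on [ ( 1 - sqrt(c) )^2 , ( 1 + sqrt(c) )^2 ] , and for c > 1 there is an additional point mass 1 - 1/c at zero ) , sketched in code :

```python
import math

def mp_density(x, c):
    """Marchenko-Pastur density with ratio parameter c:
    f(x) = sqrt((b - x)(x - a)) / (2 * pi * c * x) on [a, b],
    where a = (1 - sqrt(c))**2 and b = (1 + sqrt(c))**2."""
    a = (1.0 - math.sqrt(c)) ** 2
    b = (1.0 + math.sqrt(c)) ** 2
    if x <= a or x >= b:
        return 0.0
    return math.sqrt((b - x) * (x - a)) / (2.0 * math.pi * c * x)
```

for c <= 1 the density integrates to one ; for c > 1 the continuous part integrates to 1/c and the remaining mass 1 - 1/c sits at zero .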
as . moreover , and if , and if . consider the hilbert transform of . that transform is well defined for real outside the support of , that is , on the set . using ( [ mp density ] ) , we get , where the sign of the square root is chosen to be the sign of . it is not hard to see that is strictly decreasing on . thus , on , we can define an inverse function , with values . the so - called -transform of takes the form . for and sufficiently small , consider the subset of given , for , by $$\left[ -\frac{1}{\sqrt{c}\left( 1-\sqrt{c}\right) } +\varepsilon ,\,0\right) \cup \left( 0,\,\frac{1}{\sqrt{c}\left( 1+\sqrt{c}\right) } -\varepsilon \right] ,$$ with the analogous definition for . from ( [ stijeltjes analytic ] ) , when , when , and when . therefore , with probability approaching one as . [ proposition2 ] let be a sequence of random diagonal matrices , where . further , let , where is the -transform of the marchenko - pastur distribution . assume that , for some and , with probability approaching one as . then , $$\cdots \times \prod_{j=1}^{r}\prod_{s=1}^{j}\sqrt{1 - 4\left( \theta _{pj}v_{pj}\right) \left( \theta _{ps}v_{ps}\right) c_{p}}\,\left( 1+o(1)\right) \quad \text{a.s.} ,$$ where is uniform over all sequences satisfying the assumption . this proposition extends theorem 3 of guionnet and maida ( 2005 ) to cases where depends on and is random . when and are fixed , it is straightforward to verify that , where in guionnet and maida s ( 2005 ) theorem 3 the expression should have been used instead of , which is a typo . setting and in proposition [ proposition2 ] and using formula ( [ lr1 ] ) from proposition [ proposition1 ] gives us an expression for which is an equivalent of formula ( 4.1 ) in theorem 7 of omh . theorem [ theorem1 ] below uses proposition [ proposition2 ] to generalize theorem 7 of omh to the multispiked case . let , and for define the set $$\left[ -\sqrt{c}+\delta ,\,0\right) \cup \left( 0,\,\sqrt{c}-\delta \right] ,$$ with the analogous definition for .
the condition for some implies that for some and sufficiently large . below , we are only interested in non - negative values of , and assume that . let be the space of real - valued continuous functions on ^{r} . furthermore , and , viewed as random elements of ^{r} . theorem [ theorem1 ] and le cam s first lemma ( van der vaart ( 1998 ) , p.88 ) imply that the joint distributions of ( as well as those of ) under the null and under the alternative are mutually contiguous for any . by applying le cam s third lemma ( van der vaart ( 1998 ) , p.90 ) , we can study the local powers of tests detecting signals in noise . the requirement that be positive under alternatives corresponds to situations where the signals contained in the data are independent from the noise . if dependence between the signals and the noise is allowed , one might consider two - sided alternatives of the form for some . values of between and correspond to alternatives under which the noise variance is reduced along certain directions . in view of proposition [ proposition2 ] , it should not be difficult to generalize theorem [ theorem1 ] to the case of fully ( all s ) or partially ( some s ) two - sided alternatives . this problem will not be discussed here , and is left for future research . denote by and , respectively , the asymptotic powers of the asymptotically most powerful - and -based tests of size of the null against a point alternative with . as functions of and , these are called the _ asymptotic power envelopes _ . [ proposition3 ] let denote the standard normal distribution function . then , and $$\beta _{\mu }\left( h\right) = 1-\Phi \left[ \Phi ^{-1}\left( 1-\alpha \right) -\sqrt{-\frac{1}{2}\sum_{i,j=1}^{r}\left( \ln \left( 1-\frac{h_{i}h_{j}}{c}\right) +\frac{h_{i}h_{j}}{c}\right) }\,\right] .$$
\label{local power mu}\end{aligned}\ ] ] Figure [envelope_may2012] shows the asymptotic power envelopes and as functions of and when is two-dimensional. [Figure envelope_may2012: (upper panel) and (lower panel) for , as functions of .]

It is important to realize that the asymptotic power envelopes derived in Proposition [proposition3] are valid not only for - and -based tests but also for any test invariant under left orthogonal transformations of the observations (where is a orthogonal matrix), and for any test invariant under multiplication by any non-zero constant and left orthogonal transformations of the observations (where and is a orthogonal matrix), respectively.

Let and denote the Frobenius norm and the spectral norm, respectively, of a matrix. Let be the null hypothesis and let be any of the following alternatives: for some , or , or , or , where is a positive constant that may depend on and . [proposition4] For specified , consider tests of against that are invariant with respect to the left orthogonal transformations of the data. ] as .

Consider the following three distances between measures and : the Kolmogorov distance , the Wasserstein distance , and the Kantorovich distance . As is well known (see, for example, Exercise 1 on p. 425 of Dudley (2002)), . Therefore, we have . It follows from Theorem 1.1 of Götze and Tikhomirov (2011) that there exists a constant such that for all . Thus, a.s.. Since is and a.s., the result follows.

Let us denote the integral as . As explained in Guionnet and Maïda (2005, p. 454), we can write , where denotes the expectation conditional on and the -dimensional vectors are obtained from standard Gaussian -dimensional vectors, independent from , by a Schmidt orthogonalization procedure. More precisely, we have , where and . In the spirit of the proof of Guionnet and Maïda's (2005) Theorem 3, define , where stands for the classical Kronecker symbol.
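The envelope formula ([local power mu]) lends itself to direct numerical evaluation. The following sketch computes the expression displayed in Proposition [proposition3] for a multispiked local alternative; the function name, the use of SciPy, and the default size are our own illustrative choices, not part of the paper:

```python
import numpy as np
from scipy.stats import norm

def power_envelope_mu(h, c, alpha=0.05):
    """Asymptotic power envelope beta_mu(h) from Proposition 3:
    1 - Phi[ Phi^{-1}(1 - alpha)
             - sqrt( -(1/2) * sum_{i,j} ( ln(1 - h_i h_j / c) + h_i h_j / c ) ) ].
    Requires h_i * h_j < c for every pair (i, j), so the logarithm is defined."""
    h = np.asarray(h, dtype=float)
    hh = np.outer(h, h) / c                  # matrix of h_i h_j / c
    if np.any(hh >= 1.0):
        raise ValueError("need h_i * h_j < c for every pair (i, j)")
    drift = np.sqrt(-0.5 * np.sum(np.log(1.0 - hh) + hh))
    return 1.0 - norm.cdf(norm.ppf(1.0 - alpha) - drift)
```

At h = 0 the drift term vanishes and the power reduces to the size alpha, as it should, and the power increases in each h_i, consistent with the shape of the envelopes in Figure [envelope_may2012].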
As will be shown below, after an appropriate change of measure, and are asymptotically centered Gaussian. Expressing the exponent in ([first ip]) as a function of and , changing the measure of integration, and using the asymptotic Gaussianity will establish the proposition. Let , where . Using this notation, ([first ip]), ([a definition]), and ([gamma definition]), we get, after some algebra, , where is the standard Gaussian probability measure, and is a matrix with -th element . Next, define the event , where and are positive parameters to be specified later. Somewhat abusing notation, we will also refer to as a rectangular region in that consists of vectors with odd coordinates in and even coordinates in . Let denote the indicator function. Below, we establish the asymptotic behavior of as first , and then and diverge to infinity. We then show that the asymptotics of and coincide.

Consider infinite arrays of random centered Gaussian measures and . There exists such that, for sufficiently large , and a.s.. Therefore, still a.s., for sufficiently large , when and when . Hence, the measures are a.s. well defined for sufficiently large . Whenever is not well defined, we re-define it arbitrarily. We have } j_{p}^{m , m^{\prime } } , \vspace{-0.2 in } \label{immnow}\]] where . We now show that, under , a.s. converges in distribution to a centered -dimensional Gaussian vector, so that is asymptotically equivalent to an integral with respect to a Gaussian measure on . First, let us find the mean and the variance of under measure . Note that and . With probability one, for sufficiently large we have ; by Corollary 1, is uniformly in , a.s.. That Corollary 1 can be applied here follows from the form of expression ([k transform]) for . Similarly, in , a.s.. Thus, . Next, with probability one, for sufficiently large we have and . Then, using Corollary 1, we get in . Similarly, we have in , a.s..
A straightforward calculation, using formula ([k transform]), shows that in , a.s., where the matrix has elements \text{.}\vspace{% -0.25 in } \label{v22}\end{aligned}\]] This implies that is bounded away from zero and infinity for sufficiently large , uniformly over , a.s.. By construction, is a sum of independent random vectors having uniformly bounded third and fourth absolute moments under measure ; therefore, a central limit theorem applies. Moreover, since the function is Lipschitz over uniformly in , Theorem 13.3 of Bhattacharya and Rao (1976), which describes the accuracy of the Gaussian approximations to integrals of the form ([jmm_first]) in terms of the oscillation measures of the integrand, implies that , where denotes the Gaussian distribution function with mean and variance , and converges to zero uniformly in as , a.s.. The rate of such a convergence may depend on the values of and . Note that, in as , the difference converges to zero uniformly over , where . Such a convergence, together with ([egammatil]), ([vargammatil]), and ([jmm_second]), implies that . Note that the difference converges to zero as , uniformly in , for sufficiently large .
On the other hand, } { 2\pi \!\sqrt{\det \left ( v_{p}^{(j , s)}\right ) } } \mathrm{d}y,% \vspace{-0.2 in } \label{mmintegral}\]] where . Using ([v11]-[v22]), we verify that, for sufficiently large , is a.s. positive definite, and , uniformly in , for sufficiently large . Equations ([immnow]), ([jmm_third]), and ([mmintegralfinal]) describe the behavior of for large and .

Let us now turn to the analysis of . Let be the event , and let . As explained in Guionnet and Maïda (2005, p. 455), are independent of . Therefore, denoting again by the centered standard Gaussian measure on , we have , and, using Chebyshev's inequality, for and (here we assume that ), we get . Similarly, we show that the same inequality holds when is replaced by , and thus the same line of arguments yields . Inequalities ([subgaussgms]) and ([subgaussgmm]) imply that , and therefore, for sufficiently large , } \left ( j_{p}^{m , m^{\prime } } + j_{p}^{m , m^{\prime } , \infty } \right ) , \vspace{-0.2 in } \label{ipm}\]] where . We will now derive an upper bound for . From the definition of , we see that there exist positive constants and , which may depend on and , such that for any satisfying and for sufficiently large , when holds, . Clearly, . Therefore, . First assume . Denote as and as . Note that, under , is a standard normal random variable. Further, as long as for , considered as a function of is continuous on for sufficiently large , a.s.. Hence, the empirical distribution of converges. Moreover, and a.s. converge to finite real numbers. Now, for such that , we have for sufficiently large , a.s..
Using this inequality, we get, for sufficiently large and any positive such that (here we assume that and are such that satisfies the above requirements), . Replacing by in the above derivations and combining the result with the above inequality, we get . Following a similar line of arguments, we obtain . Thus, for sufficiently large , . Finally, combining ([imversusi]), ([ipm]), and ([jkk]), we obtain for } \vspace{-0.2 in } \label{jp}\ ] ] the following upper and lower bounds: . Let be an arbitrarily small number. Equations ([jmm_third]) and ([mmintegralfinal]) imply that there exist and such that, for any and all sufficiently large , . Let us choose and so that \sup_{\left\ { 2\theta _ { pj}\in \omega _ { \varepsilon \eta } , j\leq r\right\ } } \prod_{j=1}^{r}\prod_{s=1}^{j}\sqrt{1\!-\!4\left ( \!\theta _ { pj}v_{pj}\!\right ) \left ( \!\theta _ { ps}v_{ps}\!\right ) c_{p}}<\frac{\tau } { 4}\vspace{-0.2in}\]] for all sufficiently large , a.s.. Then, ([lastineq]) implies that for all sufficiently large , a.s.. Since can be chosen arbitrarily, we have, from ([jp]) and ([jpbound]), } \\ & & \times \left ( \prod_{j=1}^{r}\prod_{s=1}^{j}\sqrt{1\!-\!4\left ( \!\theta _ { pj}v_{pj}\!\right ) \left ( \!\theta _ { ps}v_{ps}\!\right ) c_{p}}+o(1)\right ) , \vspace{-0.2in}\end{aligned}\]] where as , uniformly in , a.s..

Setting , we have , , and . Further, by Lemma 11 and formula (3.3) of OMH, for sufficiently large , a.s.. With these auxiliary results, formula ([equivalence 1]) is a straightforward consequence of ([lr1]) and Proposition 2. In what follows, we omit the subscript in to simplify notation. Note that is the integral appearing in expression ([lr2]) for . Let us now prove that, for some constant , uniformly in ^{r}. ] for all and all sufficiently large , a.s.. Therefore, for all ^{r},\vspace{-0.2in} ], a.s..
Therefore, a.s., for all sufficiently large , , where is the complementary incomplete gamma function (see Olver 1997, p. 45) with . Hence, for sufficiently large and , we can continue: , whenever and (Olver 1997, p. 70). Therefore, we have, for sufficiently large , . Comparing this to ([i1bound]), we see that can be chosen so that . Further, for sufficiently large , . Therefore, for any positive and sufficiently large , and using Stirling's approximation, we have, a.s., that . Comparing this to ([i1bound]), we see that can be chosen so that , a.s.. Combining ([i2bound]) and ([i3bound]), we get ([ip]).

Now, letting , note that there exist and such that \text { and } x\in \left [ p\!-\!\alpha \sqrt{p}% , p\!+\!\alpha \sqrt{p}\right ] \right\ } \subseteq \theta _ { \varepsilon \eta } ] and . ] Further, consider the integral . Splitting the domain of integration into segments , \left [ p\!-\!\alpha p^{\gamma } , p\!+\!\alpha p^{\gamma } \right ] ] and , where , and denoting the corresponding integrals by , , and , respectively, we have . By the Laplace approximation, we have that dominates and , and implies that . Hence, only constant and linear terms in the expansion of into power series of matter for the evaluation of . Let us find these terms. By Corollary 1, a.s..
Using this fact, after some algebra, we get . It follows that } \label{last eq } \\ & & \times e^{\sum_{j=1}^{r}\theta _ { pj}v_{pj}\left ( x - s_{p}\right ) } \left ( \prod_{j=1}^{r}\prod_{s=1}^{j}\sqrt{1\!-\!4\left ( \!\theta _ { pj}v_{pj}\!\right ) \left ( \!\theta _ { ps}v_{ps}\!\right ) c_{p}}+o(1)\right ) \!\mathrm{d}x \notag \\ & = & \left ( 1+o(1)\right ) \prod_{j=1}^{r}\left ( 1+h_{j}\right ) ^{\frac{n_{p}}{2% } } l_{p}\!\left ( h;\lambda _ { p}\right ) \int_{p\!-\!\alpha \sqrt{p}% } ^{p\!+\!\alpha \sqrt{p}}\!\!\!x^{\frac{np}{2}-1}e^{-\frac{n}{2}% x}e^{\sum_{j=1}^{r}\theta _ { pj}v_{pj}\left ( x - s_{p}\right ) } \mathrm{d}x,% \vspace{-0.2 in } \notag\end{aligned}\]] where the last equality in ([last eq]) follows from ([lr1]) and Proposition 2. The last equality in ([last eq]), ([lr2]), and the fact that together establish ([equivalence 2]). The rest of the statements of Theorem 1 follow from ([equivalence 1]), ([equivalence 2]), and Lemmas 12 and A2 of OMH.

To save space, we only derive the asymptotic power envelope for the relatively more difficult case of real-valued data and -based tests. According to the Neyman-Pearson lemma, the most powerful test of against the simple alternative is the test which rejects the null when is larger than a critical value . It follows from Theorem 1 that, for such a test to have asymptotic size , must be . According to Le Cam's third lemma and Theorem 1, under , ; the asymptotic power ([local power mu]) follows.

Before turning to the proof of Proposition [proposition4], let us clarify the invariance issues in the problem under study. For basic definitions (invariant, maximal invariant, etc.
), we refer to Chapter 6 of Lehmann and Romano (2005). Suppose that is a random matrix with . This model is clearly invariant under the group , acting on , of left-hand multiplications by a orthogonal matrix ; so are the null hypothesis and the alternative . Letting denote the -tuple of non-zero eigenvalues of , is clearly invariant under that group, since and share the same eigenvalues for any orthogonal matrix and any matrix . However, is not maximal invariant for , as and , where is an arbitrary orthogonal matrix, share the same , although, in general, there is no orthogonal matrix such that . Now, the joint density of the elements of is . By the factorization theorem, is a sufficient statistic, and it is legitimate to restrict attention to -measurable inference procedures. Left-hand orthogonal multiplication of yields, for , a transformation of the form . When ranges over the family of orthogonal matrices, those transformations also form a group, say , now acting on the space of symmetric positive semidefinite real matrices of rank . Clearly, is maximal invariant for , as and share the same eigenvalues if and only if for some orthogonal matrix . A similar reasoning applies in the case of unspecified , with a larger group combining multiplication by an arbitrary non-zero constant with the left orthogonal transformations. Sufficiency and invariance then lead to restricting attention to -measurable tests.

With the same notation as above, write for the sufficient statistic. Consider an arbitrary invariant (under the group of left orthogonal transformations of ) test , and define . Then is a -measurable test with the same size and power function as . It follows from the proof of Theorem 6.5.3(i) in Lehmann and Romano (2005) that is _almost invariant_. Moreover, since the conditions of Lemma 6.5.1 (same reference) hold, this test is invariant under the group (acting on ).
Since the ordered -tuple of the eigenvalues of is maximal invariant for , and since any invariant statistic is a measurable function of a maximal invariant one, must be -measurable. Hence, is a -measurable test and has the same power function as , as was to be shown. The existence of a -measurable test with the same power function as that of a test invariant under left orthogonal transformations and multiplication by non-zero constants is established similarly.

Abstract. This paper deals with the local asymptotic structure, in the sense of Le Cam's asymptotic theory of statistical experiments, of the signal detection problem in high dimension. More precisely, we consider the problem of testing the null hypothesis of sphericity of a high-dimensional covariance matrix against an alternative of (unspecified) multiple symmetry-breaking directions (_multispiked_ alternatives). Simple analytical expressions for the asymptotic power envelope and the asymptotic powers of previously proposed tests are derived. These asymptotic powers are shown to lie very substantially below the envelope, at least for relatively small values of the number of symmetry-breaking directions under the alternative. In contrast, the asymptotic power of the likelihood ratio test based on the eigenvalues of the sample covariance matrix is shown to be close to that envelope. These results extend to the case of multispiked alternatives the findings of an earlier study (Onatski, Moreira and Hallin, 2011) of the single-spiked case. The methods we are using here, however, are entirely new, as the Laplace approximations considered in the single-spiked context do not extend to the multispiked case. Key words: sphericity tests, large dimensionality, asymptotic power, spiked covariance, contiguity, power envelope.
This paper concentrates on relationships of formal systems with biology. In particular, this is a study of different forms and formalisms for replication. In living systems there is an essential circularity that is the living structure. Living systems produce themselves from themselves and the materials and energy of the environment. There is a strong contrast in how we avoid circularity in mathematics and how nature revels in biological circularity. One meeting point of biology and mathematics is knot theory and topology. This is no accident, since topology is indeed a controlled study of cycles and circularities in primarily geometrical systems. In this paper we will discuss DNA replication, logic and biology, the relationship of symbol and object, and the emergence of form. It is in the replication of DNA that the polarity (yes/no, on/off, true/false) of logic and the continuity of topology meet. Here polarities are literally fleshed out into the forms of life. We shall pay attention to the different contexts for the logical, from the mathematical to the biological to the quantum logical. In each case there is a shift in the role of certain key concepts. In particular, we follow the notion of copying through these contexts and with it gain new insight into the role of replication in biology, in formal systems and at the quantum level (where it does not exist!). In the end we arrive at a summary formalism, a chapter in _boundary mathematics_ (mathematics using directly the concept and notation of containers and delimiters of forms; compare and ), where there are not only containers, but also extainers, entities open to interaction and distinguishing the space that they are not.
In this formalism we find a key for the articulation of diverse relationships. The _boundary algebra of containers and extainers_ is to biologic what Boolean algebra is to classical logic. Let and . Then and . Thus an extainer produces a container when it interacts with itself, and a container produces an extainer when it interacts with itself. The formalism of containers and extainers is a chapter in the foundations of a symbolic language for shape and interaction. With it, we can express the _form_ of DNA replication succinctly as follows: let the DNA itself be represented as a container . We regard the two brackets of the container as representatives for the two matched DNA strands. We let the extainer represent the cellular environment with its supply of available base pairs (here symbolized by the individual left and right brackets). Then when the DNA strands separate, they encounter the matching bases from the environment and become two DNAs. Life itself is about systems that search and learn and become. Perhaps a little symbol like , with the property that produces containers and retains its own integrity in conjunction with the autonomy of (the DNA), could be a step toward bringing formalism to life.

*Acknowledgment.
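The bracket algebra just described can be checked mechanically by modeling the container C = <> and the extainer E = >< as strings, with concatenation as the only operation. This is an illustrative sketch of our own, not notation from the paper:

```python
# A minimal string model of the container/extainer algebra.
# "<>" is the container C, "><" is the extainer E; juxtaposition of
# expressions is string concatenation.
C = "<>"
E = "><"

def replicate(dna):
    """DNA-style replication: the two strands "<" and ">" of the container
    separate, and the environment supplies the extainer E in the cleft,
    so that <> becomes <><>, i.e. two containers."""
    assert dna == C
    return dna[0] + E + dna[1]

# EE = ><>< contains a container; CC = <><> contains an extainer
assert E + E == E[0] + C + E[1]
assert C + C == C[0] + E + C[1]
assert replicate(C) == C + C     # one DNA becomes two
```

The two assertions in the middle are exactly the statements "an extainer produces a container when it interacts with itself" and vice versa.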
* The author thanks Sofia Lambropoulou for many useful conversations in the course of preparing this paper. The author also thanks Sam Lomonaco, John Hearst, Yuri Magarshak, James Flagg and William Bricken for conversations related to the content of the present paper. Most of this effort was sponsored by the Defense Advanced Research Projects Agency (DARPA) and Air Force Research Laboratory, Air Force Materiel Command, USAF, under agreement F30602-01-2-05022. The U.S. Government is authorized to reproduce and distribute reprints for government purposes notwithstanding any copyright annotations thereon. The views and conclusions contained herein are those of the author and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Defense Advanced Research Projects Agency, the Air Force Research Laboratory, or the U.S. Government. (Copyright 2002.)

We start this essay with the question: during the replication of DNA, how do the daughter DNA duplexes avoid entanglement? In the words of John Hearst, we are in search of the mechanism for the "immaculate segregation". This question is inevitably involved with the topology of the DNA, for the strands of the DNA are interwound with one full turn for every ten base pairs.
With the strands so interlinked it would seem impossible for the daughter strands to separate from their parents. A key to this problem certainly lies in the existence of the topoisomerase enzymes that can change the linking number between the DNA strands and also can change the linking number between two DNA duplexes. It is, however, a difficult matter at best to find in a tangled skein of rope the just-right crossing changes that will unknot or unlink it. The topoisomerase enzymes do just this, changing crossings by grabbing a strand, breaking it and then rejoining it after the other strand has slipped through the break. Random strand switching is an unlikely mechanism, and one is led to posit some intrinsic geometry that can promote the process. In , a specific suggestion is made about this intrinsic geometry. It is suggested that in vivo the DNA polymerase enzyme that promotes replication (by creating loops of single-stranded DNA by opening the double-stranded DNA) has sufficient rigidity not to allow the new loops to swivel and become entangled. In other words, it is posited that the replication loops remain simple in their topology so that the topoisomerase can act to promote the formation of the replication loops, and these loops, once formed, do not hinder the separation of the newly born duplexes. The model has been to some degree confirmed. The situation would now appear to be that in the first stages of the formation of the replication loops topo I acts favorably to allow their formation and amalgamation. Then topo II has a smaller job of finishing the separation of the newly formed duplexes. In Figure 1 we illustrate the schema of this process. In this figure we indicate the action of the topo I by showing a strand being switched in between two replication loops. The action of topo II is only stated but not shown.
In that action, newly created but entangled DNA strands would be disentangled. Our hypothesis is that this second action is essentially minimized by the rigidity of the ends of the replication loops in vivo.

*Figure 1 - DNA Replication*

In the course of this research, we started thinking about the diagrammatic logic of DNA replication and more generally about the relationship between DNA replication, logic and basic issues in the foundations of mathematics and modeling. The purpose of this paper is to explain some of these issues, raise questions and place these questions in the most general context that we can muster at this time. The purpose of this paper is therefore foundational. It will not in its present form affect issues in practical biology, but we hope that it will enable us and the reader to ask fruitful questions and perhaps bring the art of modeling in mathematics and biology forward. To this end we have called the subject matter of this paper "biologic", with the intent that this might suggest a quest for the logic of biological systems, or a quest for "a biological logic", or even the question of the relationship between what we call "logic" and our own biology. We have been trained to think of physics as the foundation of biology, but it is possible to realize that indeed biology can also be regarded as a foundation for thought, language, mathematics and even physics. In order to bring this statement over to physics one has to learn to admit that physical measurements are performed by biological organisms either directly or indirectly, and that it is through our biological structure that we come to know the world. This foundational view will be elaborated as we proceed in this paper.

In logic it is implicit at the syntactical level that copies of signs are freely available.
In abstract logic there is no issue about materials available for the production of copies of a sign, nor is there necessarily a formalization of how a sign is to be copied. In the practical realm there are limitations to resources. A mathematician may need to replenish his supply of paper. A computer has a limitation on its memory store. In biology, there are no signs, but there are entities that we take as signs in our description of the workings of the biological information process. In this category, the bases that line the backbone of the DNA are signs whose significance lies in their relative placement in the DNA. The DNA itself could be viewed as a text that one would like to copy. If this were a simple formal system it would be taken for granted that copies of any given text can be made. Therefore it is worthwhile making a comparison of the methods of copying or reproduction that occur in logic and in biology.

In logic there is a level beyond the simple copying of symbols that contains a non-trivial description of self-replication. The schema is as follows: there is a universal building machine that can accept a text or description (the program) and build what the text describes. We let lowercase denote the description and uppercase denote that which is described. Thus with will build . In fact, for bookkeeping purposes we also produce an extra copy of the text ; this is appended to the production as . Thus, when supplied with a description , produces that which describes, with a copy of its description attached. Schematically we have the process shown below. Self-replication is an immediate consequence of this concept of a universal building machine. Let denote the text or program for the universal building machine. Apply to its own description.
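The builder schema, with its extra copy of the text appended to the production, can be miniaturized as a quine: a program whose output is an exact copy of its own text. In the sketch below (our illustration, not from the paper) the string s plays both roles at once, the description and the raw material of the thing described, and the pair (s, s) in s % s is the crucial syntactic repetition:

```python
# A two-line Python quine. The string s is the "description"; formatting
# s with its own repr (s % s) rebuilds the full program text, so the
# program prints itself, description attached.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running this prints the two lines of the program verbatim; removing the repetition (using s alone rather than s % s) breaks the self-reproduction, just as the builder needs b twice in b applied to b.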
The universal building machine reproduces itself. Each copy is a universal building machine with its own description appended. Each copy will proceed to reproduce itself in an unending tree of duplications. In practice this duplication will continue until all available resources are used up, or until someone removes the programs or energy sources from the proliferating machines. It is not necessary to go all the way to a universal building machine to establish replication in a formal system or a cellular automaton (see the epilogue to this paper for examples). On the other hand, all these logical devices for replication are based on the hardware/software or object/symbol distinction.

It is worth looking at the abstract form of DNA replication. DNA consists in two strands of base pairs wound helically around a phosphate backbone. It is customary to call one of these strands the "Watson" strand and the other the "Crick" strand. Abstractly we can write to symbolize the binding of the two strands into the single DNA duplex. Replication occurs via the separation of the two strands via polymerase enzyme. This separation occurs locally and propagates. Local sectors of separation can amalgamate into larger pieces of separation as well. Once the strands are separated, the environment of the cell can provide each with complementary bases to form the base pairs of new duplex DNAs. Each strand, separated in vivo, finds its complement being built naturally in the environment. This picture ignores the well-known topological difficulties present to the actual separation of the daughter strands. The base pairs are (adenine and thymine) and (guanine and cytosine).
Thus if , then . Symbolically we can oversimplify the whole process as : either half of the DNA can, with the help of the environment, become a full DNA. We can let be a symbol for the process by which the environment supplies the complementary base pairs , to the Watson and Crick strands. In this oversimplification we have cartooned the environment as though it contained an already-waiting strand to pair with and an already-waiting strand to pair with . _In fact it is the opened strands themselves that command the appearance of their mates. They conjure up their mates from the chemical soup of the environment._ The environment is an identity element in this algebra of cellular interaction. That is, is always in the background and can be allowed to appear spontaneously in the cleft between Watson and Crick: . This is the formalism of DNA replication. Compare this method of replication with the movements of the universal building machine supplied with its own blueprint. Here Watson and Crick ( and ) are each both the machine _and_ the blueprint for the DNA. They are complementary blueprints, each containing the information to reconstitute the whole molecule. They are each machines in the context of the cellular environment, enabling the production of the DNA. This coincidence of machine and blueprint, hardware and software, is an important difference between classical logical systems and the logical forms that arise in biology. One can look at formal systems involving self-replication that do not make a distinction between symbol and object.
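The complementary pairing that drives this formalism, in which each strand "commands the appearance" of its mate, is easy to model directly on the base alphabet. A small illustrative sketch (the names are ours):

```python
# Watson-Crick base pairing: adenine with thymine, guanine with cytosine.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand):
    """The environment's action on a separated strand: each base calls up
    its mate, so the strand conjures a full duplex partner."""
    return "".join(PAIR[base] for base in strand)

watson = "ATGGC"
crick = complement(watson)            # "TACCG"
assert complement(crick) == watson    # the dual of the dual is the original
```

The last assertion is the formal content of the pairing: taking the complement twice returns the original strand, so either half of the duplex determines the whole.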
In the case of formal systems this means that one is working entirely on the symbolic side, quite a different matter from the biology, where there is no intrinsic symbolism, only our external descriptions of processes in such terms. An example at the symbolic level is provided by the lambda calculus of Church and Curry, where functions are allowed to take themselves as arguments. This is accomplished by the following axiom.

*Axiom for a lambda algebra*: Let be an algebraic system with one binary operation denoted for elements and of . Let be an algebraic expression over with one variable . Then there exists an element of such that for all in .

An algebra (not associative) that satisfies this axiom is a representation of the lambda calculus of Church and Curry. Let be an element of and define . Then by the axiom we have in such that for any in . In particular (and this is where the "function" becomes its own argument), . Thus we have shown that for any in , there exists an element in such that : "every element of has a fixed point." This conclusion has two effects. It provides a fixed point for the function , and it creates the beginning of a recursion in the form . The way we arrived at the fixed point was formally the same as the mechanism of the universal building machine. Consider that machine: . We have left out the repetition of the machine itself. You could look at this as a machine that uses itself up in the process of building . Applying to its own description we have the self-replication . The repetition of in the form on the right-hand side of this definition of the builder property is comparable with , with its crucial repetition as well.
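The fixed-point construction in the axiom, defining G so that Gx = F(xx) and then observing GG = F(GG), can be run directly in code. A hedged sketch of our own: in an eager language the self-application GG would evaluate forever, so the standard remedy is to delay it behind an extra argument (the eta-expanded, Z-combinator form), which changes nothing in the algebra but makes the recursion terminate on demand:

```python
def fix(F):
    """Return a fixed point of F, built exactly as in the text:
    G = lambda x: F(x applied to x), then the fixed point is G(G).
    The inner lambda delays x(x) so Python's eager evaluation terminates."""
    G = lambda x: F(lambda *args: x(x)(*args))
    return G(G)

# the recursion "GG = F(GG)" put to work: factorial without explicit self-reference
factorial = fix(lambda f: lambda n: 1 if n == 0 else n * f(n - 1))
assert factorial(5) == 120
```

Here `factorial` is literally a fixed point: applying the functional once more returns an equivalent function, which is the equals sign that, as the next paragraph notes, replaces the builder's arrow.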
In the fixed point theorem, the arrow is replaced by an equals sign! Repetition is the core of self-replication in classical logic. _This use of repetition assumes the possibility of a copy at the syntactic level, in order to produce a copy at the symbolic level._ There is, in this pivot on syntax, a deep relationship with other fundamental issues in logic. In particular, this same form of repetition is in back of the Cantor diagonal argument showing that the set of subsets of a set has greater cardinality than the original set, and it is in back of the Gödel theorem on the incompleteness of sufficiently rich formal systems. The pattern is also in back of the production of paradoxes such as the Russell paradox of the set of all sets that are not members of themselves. There is not space here to go into all these relationships, but the Russell paradox will give a hint of the structure. Let "ab" be interpreted as "b is a member of a". Then can be taken as the definition of a set such that is a member of exactly when it is _not_ the case that is a member of . Note the repetition of in the definition. Substituting for , we obtain , which says that is a member of exactly when it is not the case that is a member of . This is the Russell paradox. From the point of view of the lambda calculus, we have found a fixed point for negation.

Where is the repetition in the DNA self-replication? The repetition and the replication are no longer separated. The repetition occurs not syntactically, but directly at the point of replication. Note the device of pairing or mirror imaging: calls up the appearance of , and calls up the appearance of ; calls up the appearance of , and calls up the appearance of . Each object calls up the appearance of its _dual or paired object_: calls up , and calls up . The object that replicates is implicitly a repetition in the form of a pairing of object and dual object.
replicates via , whence the repetition is inherent in the replicand in the sense that the dual of a form is a repetition of that form.

We now consider the quantum level. Here copying is not possible. We shall detail this in a subsection. For a quantum process to copy a state, one needs a unitary transformation to perform the job. One can show, as we explain in the last subsection of this section, that this cannot be done. There are indirect ways that seem to make a copy, involving a classical communication channel coupled with quantum operators (so-called quantum teleportation). The production of such a quantum state constitutes a reproduction of the original state, but in these cases the original state is lost, so teleportation looks more like transportation than copying. With this in mind it is fascinating to contemplate that DNA and other molecular configurations are actually modeled in principle as certain complex quantum states. At this stage we meet the boundary between classical and quantum mechanics, where conventional wisdom finds it is most useful to regard the main level of molecular biology as classical.

We shall quickly indicate the basic principles of quantum mechanics. The quantum information context encapsulates a concise model of quantum theory: _The initial state of a quantum process is a vector in a complex vector space . Observation returns basis elements of with probability ,_ where , with the conjugate transpose of . A physical process occurs in steps , where is a unitary linear transformation. Note that since when is unitary, it follows that probability is preserved in the course of a quantum process. One of the details for any specific quantum problem is the nature of the unitary evolution. This is specified by knowing appropriate information about the classical physics that supports the phenomena. This information is used to choose an appropriate Hamiltonian through which the unitary operator is constructed via a correspondence principle
that replaces classical variables with appropriate quantum operators. (In the path integral approach one needs a Lagrangian to construct the action on which the path integral is based.) One needs to know certain aspects of classical physics to solve any given quantum problem. The classical world is known through our biology. In this sense biology is the foundation for physics.

A key concept in the quantum information viewpoint is the notion of the superposition of states. If a quantum system has two distinct states |v> and |w>, then it has infinitely many states of the form a|v> + b|w>, where a and b are complex numbers taken up to a common multiple. States are really "rays" in the projective space associated with H. There is only one superposition of a single state with itself.

Dirac introduced the "bra-(c)-ket" notation <a|b> for the inner product of complex vectors a and b. He also separated the parts of the bracket into the _bra_ <a| and the _ket_ |b>. Thus <a|b> = <a| |b>. In this interpretation, the ket |b> is identified with the vector b, while the bra <a| is regarded as the element dual to a in the dual space. The dual element to a corresponds to the conjugate transpose a† of the vector a, and the inner product is expressed in conventional language by the matrix product a†b (which is a scalar since b is a column vector). Having separated the bra and the ket, Dirac can write the "ket-bra" |a><b|. In conventional notation, the ket-bra is a matrix, not a scalar, and we have the following formula for the square of P = |a><b|:

P^2 = |a><b| |a><b| = |a> (<b|a>) <b| = <b|a> |a><b|.

Written entirely in Dirac notation we have P^2 = <b|a> P. The standard example is a ket-bra P = |a><a| where <a|a> = 1, so that P^2 = P. Then P is a projection matrix, projecting to the subspace of H that is spanned by the vector |a>. In fact, for any vector |b> we have

P|b> = |a><a| |b> = |a><a|b> = <a|b> |a>.

If {|c_1>, ..., |c_n>} is an orthonormal basis for H, and P_i = |c_i><c_i|, then for any vector |a> we have |a> = Sum_i <c_i|a> |c_i>. Hence

<b|a> = Sum_i <b|c_i><c_i|a> = <b| (Sum_i |c_i><c_i|) |a>.

We have written this sequence of equalities from <b|a> to <b|a> to emphasize the role of the identity 1 = Sum_i |c_i><c_i| = Sum_i P_i, so that one can write <b|a> = <b| 1 |a> = Sum_i <b|c_i><c_i|a>. In the quantum context one may wish to consider the probability of starting in state |a> and ending in state |b>. The square of the
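The ket-bra formula P^2 = <b|a> P, and its specialization to a projection when a = b, can be verified numerically. A pure-Python sketch (our own; the vectors are arbitrary examples):

```python
# Sketch of the ket-bra |a><b| as a matrix and the identity P^2 = <b|a> P,
# specializing to a projection P^2 = P when a = b and <a|a> = 1.

def inner(u, v):
    return sum(x.conjugate() * y for x, y in zip(u, v))

def ket_bra(a, b):
    # |a><b| : the matrix with entries a_i * conj(b_j)
    return [[x * y.conjugate() for y in b] for x in a]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

h = 2 ** -0.5
a, b = [1.0, 0.0], [h, h]
P = ket_bra(a, b)
P2 = mat_mul(P, P)
scalar = inner(b, a)                      # <b|a>
ok = all(abs(P2[i][j] - scalar * P[i][j]) < 1e-12
         for i in range(2) for j in range(2))
print(ok)                                 # True: P^2 = <b|a> P

Q = ket_bra(a, a)                         # <a|a> = 1, so Q is a projection
print(mat_mul(Q, Q) == Q)                 # True: Q^2 = Q
```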
amplitude <b|a> is the probability for this event: |<b|a>|^2. This can be refined if we have more knowledge. If it is known that one can go from a to c_i (i = 1, ..., n) and from c_i to b, and that the intermediate states |c_i> are a complete set of orthonormal alternatives, then we can assume that <c_i|c_i> = 1 for each i and that Sum_i |c_i><c_i| = 1. This identity now corresponds to the fact that 1 is the sum of the probabilities of an arbitrary state being projected into one of these intermediate states. If there are intermediate states between the intermediate states, this formulation can be continued until one is summing over all possible paths from a to b. This becomes the path integral expression for the amplitude <b|a>.

We wish to draw attention to the remarkable fact that this formulation of the expansion over intermediate quantum states has exactly the same pattern as our formal summary of DNA replication. Compare them. The form of DNA replication is shown below; here the environment of possible base pairs is represented by the ket-bra E:

<W|C> → <W| E |C> = <W|C> <W|C>.

Here is the form of intermediate state expansion:

<b|a> = Sum_i <b|c_i><c_i|a> = <b| (Sum_i |c_i><c_i|) |a> = <b| 1 |a>.

We compare <W| E |C> and <b| 1 |a>. That the unit 1 can be written as a sum over the intermediate states is an expression of how the environment (in the sense of the space of possibilities) impinges on the quantum amplitude, just as the expression of the environment as a soup of bases ready to be paired (a classical space of possibilities) serves as a description of the biological environment. The symbol E indicates the availability of the bases from the environment to form the complementary pairs. The projection operators |c_i><c_i| are the possibilities for interlock of initial and final state through an intermediate possibility.
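The intermediate-state identity <b|a> = Sum_i <b|c_i><c_i|a> is easy to confirm numerically. A small sketch (ours; the states are arbitrary examples):

```python
# Verify that the amplitude <b|a> equals the sum over a complete orthonormal
# set of intermediate states: <b|a> = sum_i <b|c_i><c_i|a>.

def inner(u, v):
    # <u|v> = sum of conj(u_i) * v_i
    return sum(x.conjugate() * y for x, y in zip(u, v))

a = [0.6, 0.8j]
b = [1 / 2 ** 0.5, 1 / 2 ** 0.5]
basis = [[1, 0], [0, 1]]          # the intermediate states |c_i>

direct = inner(b, a)
via_intermediates = sum(inner(b, c) * inner(c, a) for c in basis)
print(abs(direct - via_intermediates) < 1e-12)   # True
```

Iterating this expansion with further layers of intermediate states is precisely the discrete skeleton of the path integral mentioned above.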
In the quantum mechanics, the special pairing is not of bases but of a state and a possible intermediate from a basis of states. It is through this common theme of pairing that the conceptual notation of the bras and kets lets us see a correspondence between such separate domains.

Finally, we note that in quantum mechanics it is not possible to copy a quantum state! This is called the no-cloning theorem of elementary quantum mechanics. Here is the proof:

*Proof of the no-cloning theorem.* In order to have a quantum process make a copy of a quantum state, we need a unitary mapping U: H ⊗ H → H ⊗ H, where H is a complex vector space, such that there is a fixed state |X> with the property that U(|X>|a>) = |a>|a> for any state |a> (|X>|a> denotes the tensor product |X> ⊗ |a>). Let T(|a>) = U(|X>|a>) = |a>|a>. Note that T is a linear function of |a>. Thus we have

T(|a> + |b>) = T(|a>) + T(|b>) = |a>|a> + |b>|b>.

But if T cloned the superposition, we would have

T(|a> + |b>) = (|a> + |b>)(|a> + |b>) = |a>|a> + |a>|b> + |b>|a> + |b>|b>.

Hence |a>|a> + |b>|b> = |a>|a> + |a>|b> + |b>|a> + |b>|b>, from which it follows that |a>|b> + |b>|a> = 0. Since |a> and |b> are arbitrary states, this is a contradiction.

The proof of the no-cloning theorem depends crucially on the linear superposition of quantum states and the linearity of quantum process. By the time we reach the molecular level and attain the possibility of copying DNA molecules, we are copying in a quite different sense than the ideal quantum copy that does not exist. The DNA and its copy are each quantum states, but they are different quantum states! That we see the two DNA molecules as identical is a function of how we filter our observations of complex and entangled quantum states. Nevertheless, the identity of two DNA copies is certainly at a deeper level than the identity of the two letters "i" in the word "identity". The latter is conventional and symbolic. The former is a matter of physics and biochemistry.

We now comment on the conceptual underpinning for the notations and logical constructions that we use in this paper.
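The no-cloning argument can be watched happening in coordinates. The sketch below (our illustration) fixes a linear map that copies the two basis states and extends it linearly; the particular unitary extension chosen is hypothetical, but any linear extension fails on a superposition in the same way.

```python
# Numerical sketch of the no-cloning argument. T is a linear (indeed unitary)
# map on H (x) H that copies both basis states, T(|x>|0>) = |x>|x>; linearity
# then forces T to produce an entangled state, not a clone, on a superposition.

def tensor(u, v):
    return [x * y for x in u for y in v]

e0, e1 = [1.0, 0.0], [0.0, 1.0]

def T(w):
    # Coordinates of H (x) H are (x00, x01, x10, x11). This permutation fixes
    # T(e0 x e0) = e0 x e0 and sends T(e1 x e0) = e1 x e1 -- a hypothetical
    # unitary extension of the copy rule (any linear extension fails the same way).
    x00, x01, x10, x11 = w
    return [x00, x01, x11, x10]

h = 2 ** -0.5
s = [h, h]                        # the superposition (|0> + |1>)/sqrt(2)
want = tensor(s, s)               # what a true cloner should output
got = T(tensor(s, e0))            # what the linear map actually outputs

print(got)    # the entangled state (|00> + |11>)/sqrt(2), not the clone
print(max(abs(x - y) for x, y in zip(want, got)) > 0.4)   # True: they differ
```

The output is the Bell state rather than the product state, which is exactly the contradiction in the proof above.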
This line of thought will lead to topology and to the formalism for replication discussed in the last section. Mathematics is built through distinctions, definitions, acts of language that bring forth logical worlds, arenas in which actions and patterns can take place. As far as we can determine at the present time, mathematics, while capable of describing the quantum world, is in its very nature quite classical. Or perhaps we make it so. As far as mathematics is concerned, there is no ambiguity hidden in the mathematical box: the box shows exactly what is potential to it when it is opened. There is nothing in the box except what is a consequence of its construction.

With this in mind, let us look at some mathematical beginnings. Take the beginning of set theory. We start with the empty set { } and we build new sets by the operation of set formation that takes any collection and puts brackets around it, making a single entity from the multiplicity of the "parts" that are so collected. The empty set herself is the result of collecting "nothing". The empty set is identical to the act of collecting. At this point of emergence the empty set is an action, not a thing. Each subsequent set can be seen as an action of collection, a bringing forth of unity from multiplicity. One declares two sets to be the same if they have the same members. With this prestidigitation of language, the empty set becomes unique, and a hierarchy of distinct sets arises as if from nothing. All representatives of the different mathematical cardinalities arise out of the void in the presence of these conventions for collection and identification.

We would like to get underneath the formal surface. We would like to see what makes this formal hierarchy tick. Will there be an analogy to biology below this play of symbols?
On the one hand, it is clear to us that there is actually no way to go below a given mathematical construction. Anything that we call more fundamental will be another mathematical construct. Nevertheless, the exercise is useful, for it asks us to look closely at how this given formality is made. It asks us to take seriously the parts that are usually taken for granted. We take for granted that the particular form of container used to represent the empty set is irrelevant to the empty set itself. But how can this be? In order to have a concept of emptiness, one needs to hold the contrast of that which is empty with "everything else". One may object that these images are not part of the formal content of set theory. But they are part of the _formalism_ of set theory.

Consider the representation of the empty set: that representation consists in a bracketing { } that we take to indicate an empty space within the brackets, and an injunction to ignore the complex typographical domains outside the brackets. Focus on the brackets themselves. They come in two varieties: the left bracket, {, and the right bracket, }. The left bracket indicates a distinction of left and right, with the emphasis on the right. The right bracket indicates a distinction between left and right, with an emphasis on the left. A left and a right bracket taken together become a _container_ when each is in the domain indicated by the other. Thus in the bracket symbol { } for the empty set, the left bracket, being to the left of the right bracket, is in the left domain that is marked by the right bracket; and the right bracket, being to the right of the left bracket, is in the right domain that is marked by the left bracket. The doubly marked domain between them is their content space, the arena of the empty set. The delimiters of the container are each themselves iconic for the process of making a distinction.
In the notation of curly brackets, { }, this is particularly evident. The geometrical form of the curly bracket is a cusp singularity, the simplest form of bifurcation. The relationship of the left and right brackets is that of a form and its mirror image. If there is a given distinction, such as left versus right, then the mirror image of that distinction is the one with the opposite emphasis. This is precisely the relationship between the left and right brackets. A form and its mirror image conjoin to make a container.

The delimiters of the empty set could be written in the opposite order: } {. This is an _extainer_. The extainer indicates regions external to itself. In this case of symbols on a line, the extainer indicates the entire line to the left and to the right of itself. The extainer is as natural as the container, but does not appear formally in set theory. To our knowledge, its first appearance is in the Dirac notation of "bras" and "kets", where Dirac takes an inner product written in the form <a|b> and breaks it up into <a| and |b>, and then makes projection operators by recombining in the opposite order as |b><a|. See the earlier discussion of quantum mechanics in this paper.

Each left or right bracket in itself makes a distinction. The two brackets are distinct from one another by mirror imaging, which we take to be a notational reflection of a fundamental process (of distinction) whereby two forms are identical (indistinguishable) except by comparison in the space of an observer. The observer _is_ the distinction between the mirror images.
Mirrored pairs of individual brackets interact to form either a _container_, <>, or an _extainer_, ><. These new forms combine to make:

<><> = <(><)> and ><>< = >(<>)<.

Two containers interact to form an extainer within container brackets. Two extainers interact to form a container between extainer brackets. The pattern of extainer interactions can be regarded as a formal generalization of the bra and ket patterns of the Dirac notation that we have used in this paper both for DNA replication and for a discussion of quantum mechanics. In the quantum mechanics application, <> corresponds to the inner product <a|b>, a commuting scalar, while >< corresponds to |b><a|, a matrix that does not necessarily commute with vectors or other matrices. With this application in mind, it is natural to decide to make the container <> an analog of a scalar quantity and let it commute with individual brackets. Writing E = ><, we then have the equation

E E = ><>< = >(<>)< = <> >< = <> E

by definition. There will be no corresponding equation for the container <>. We adopt the axiom that containers commute with other elements in this combinatorial algebra. Containers and extainers are distinguished by this property. Containers appear as autonomous entities and can be moved about. Extainers are open to interaction from the outside and are sensitive to their surroundings. At this point, we have described the basis for the formalism used in the earlier parts of this paper. If we interpret E as the "environment", then the equation E E = <> E expresses the availability of complementary forms, so that <W| E |C> = <W|C><W|C> becomes the form of DNA reproduction. We can also regard the equation as symbolic of the emergence of DNA from the chemical substrate. Just as the formalism for reproduction ignores the topology, this formalism for emergence ignores the formation of the DNA backbone along which are strung the complementary base pairs. In the biological domain we are aware of levels of ignored structure.
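The rule that containers commute and factor out as scalars can be animated as string rewriting. A toy sketch (our own construction) over the single bracket pair < and >:

```python
# Toy rewriting for the container/extainer algebra over < and >: every
# occurrence of the container "<>" is factored out as a commuting scalar
# (this factoring is exactly the "containers commute" axiom).

def normalize(word):
    """Pull each container '<>' out as a scalar factor; return (count, rest)."""
    count = 0
    while '<>' in word:
        word = word.replace('<>', '', 1)
        count += 1
    return count, word

E = '><'
# E E = ><>< = >(<>)< = <> E : one container scalar times one extainer
print(normalize(E + E))        # (1, '><')
print(normalize(E + E + E))    # (2, '><')  i.e. EEE = <><> E
```

The normal form is always a power of the container times a single residual extainer, mirroring E E = <> E.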
In mathematics it is customary to stop the examination of certain issues in order to create domains with requisite degrees of clarity. We are all aware that the operation of collection is proscribed beyond a certain point. For example, in set theory the Russell class R of all sets that are not members of themselves is not itself a set. It then follows that the collection { R } whose member is the Russell class is not a class (since a member of a class is a set). This means that the construct { R } is outside of the discourse of standard set theory. This is the limitation of expression at the "high end" of the formalism. That the set theory has no language for discussing the structure of its own notation is the limitation of the language at the "low end". Mathematical users, in speaking and analyzing the mathematical structure, and as its designers, can speak beyond both the high and low ends. In biology we perceive the pattern of a formal system, a system that is embedded in a structure whose complexity demands the elucidation of just those aspects of symbols and signs that are commonly ignored in the mathematical context. Rightly these issues should be followed to their limits. The curious thing is what peeks through when we just allow a bit of it, then return to normal mathematical discourse.
With this in mind, let us look more closely at the algebra of containers and extainers. Taking two basic forms of bracketing, < > and [ ], an intricate algebra appears from their elementary interactions:

E = >< ,  F = ][ ,  G = >[ ,  H = ]<

are the extainers, with corresponding containers:

<> ,  [] ,  [> ,  <] .

These form a closed algebraic system with the following multiplications:

EE = <>E    EF = <]G    EG = <>G    EH = <]E
FE = [>H    FF = []F    FG = [>F    FH = []H
GE = [>E    GF = []G    GG = [>G    GH = []E
HE = <>H    HF = <]F    HG = <>F    HH = <]H

Other identities follow from these. For example,

EFE = <][>E.

This algebra of extainers and containers is a precursor to the Temperley-Lieb algebra, an algebraic structure that first appeared (in quite a different way) in the study of the Potts model in statistical mechanics. We shall forgo here details about the Temperley-Lieb algebra itself, and refer the reader to the references, where this point of view is used to create unitary representations of that algebra for the context of quantum computation. Here we see the elemental nature of this algebra, and how it comes about quite naturally once one adopts a formalism that keeps track of the structure of boundaries that underlie the mathematics of set theory.

The _Temperley-Lieb algebra_ is an algebra over a commutative ring with generators 1, e_1, ..., e_{n-1} and relations

e_i^2 = d e_i ,  e_i e_{i+1} e_i = e_i ,  e_i e_{i-1} e_i = e_i ,  e_i e_j = e_j e_i for |i - j| > 1,

where d is a chosen element of the ring. These equations give the multiplicative structure of the algebra. The algebra is a free module over the ring, with basis the equivalence classes of these products modulo the given relations.
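The multiplication rule above (concatenate, then factor out the middle container) is mechanical enough to implement, and one consistent way to recover the Temperley-Lieb relations from it is to evaluate the pure containers <> and [] as the scalar d while setting the mixed containers <] and [> to 1. The sketch below (our construction; the scalar assignments are our choice) checks the relations:

```python
# The two-bracket extainer algebra: an extainer is a 2-character word; the
# product XY concatenates and factors the middle pair out as a container
# coefficient. Evaluating <> = [] = d and <] = [> = 1 realizes the
# Temperley-Lieb relations e^2 = d*e and e1 e2 e1 = e1.
from fractions import Fraction

D = Fraction(3)               # any chosen value for the container/loop scalar

def container_value(c):
    if c in ('<>', '[]'):
        return D
    if c in ('<]', '[>'):
        return Fraction(1)
    raise ValueError(c)

def mul(x, y):
    """Multiply scalar-weighted extainers, represented as (coeff, word)."""
    (ca, wa), (cb, wb) = x, y
    middle = wa[1] + wb[0]                # container formed in the middle
    return (ca * cb * container_value(middle), wa[0] + wb[1])

E1 = (Fraction(1), '><')
E2 = (Fraction(1), '][')

print(mul(E1, E1) == (D, '><'))           # True: e1^2 = d e1
print(mul(mul(E1, E2), E1) == E1)         # True: e1 e2 e1 = e1
print(mul(mul(E2, E1), E2) == E2)         # True: e2 e1 e2 = e2
```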
To match this pattern with our combinatorial algebra, let e_1 = E and e_2 = F, and identify the containers by setting <> = [] = d while <] = [> = 1. The above equations for our combinatorial algebra then match the multiplicative equations of the Temperley-Lieb algebra.

The next stage for representing the Temperley-Lieb algebra is a diagrammatic representation that uses two different forms of extainer. The two forms are obtained not by changing the shape of the given extainer, but rather by shifting it relative to a baseline. Thus we define the two shifted forms diagrammatically as shown below. In the resulting equation we have used the topological deformation of the connecting line from top to top to obtain the identity. In its typographical form, the identity requires one to connect corresponding endpoints of the brackets. In Figure 2 we indicate a smooth picture of the connection situation and the corresponding topological deformation of the lines. We have deliberately shown the derivation in a typographical mode to emphasize its essential difference from the matching pattern that produced

EFE = <][>E.

By taking the containers and extainers shifted this way, we enter a new and basically topological realm. This elemental relationship with topology is part of a deeper connection where the Temperley-Lieb algebra is used to construct representations of the Artin braid group. This in turn leads to the construction of the well-known Jones polynomial invariant of knots and links via the bracket state model. It is not the purpose of this paper to go into the details of those connections, but rather to point to that place in the mathematics where basic structures apply to biology, topology, and logical foundations.
*Figure 2 - A topological identity*

It is worthwhile to point out that the formula for expanding the bracket polynomial can be indicated symbolically in the same fashion that we used to create the Temperley-Lieb algebra via containers and extainers. We will denote a crossing in the link diagram by the letter chi, χ. The letter itself denotes a crossing where _the curved line in the letter chi is crossing over the straight segment in the letter_. The barred letter χ̄ denotes the switch of this crossing, where _the curved line in the letter chi is undercrossing the straight segment in the letter_. In the bracket state model, a crossing in a diagram for the knot or link is expanded into two possible states by either smoothing (reconnecting) the crossing horizontally, ≍, or vertically, )(. The vertical smoothing can be regarded as the extainer and the horizontal smoothing as an identity operator. In a larger sense, we can regard both smoothings as extainers with different relationships to their environments. In this sense the crossing is regarded as the superposition of horizontal and vertical extainers. The crossings expand according to the formulas

χ = A ≍ + A^(-1) )( ,
χ̄ = A^(-1) ≍ + A )( .

The verification that the bracket is invariant under the second Reidemeister move is then seen by verifying that χ χ̄ = ≍. For this one needs that the container has value -A^2 - A^(-2) (the loop value d in the model). The significant mathematical move in producing this model is the notion of the crossing as a superposition of its smoothings. It is useful to use the iconic symbol E = )( for the extainer, and to choose another iconic symbol 1 = ≍ for the identity operator in the algebra. With these choices we have

χ χ̄ = (A 1 + A^(-1) E)(A^(-1) 1 + A E) = 1 + (A^2 + A^(-2) + d) E = 1.

Note the use of the extainer identity E E = d E. At this stage the combinatorial algebra of containers and extainers emerges as the background to the topological characteristics of the Jones polynomial. The approach in this section derives from ideas in the references.

Here is another use for the formalism of bras and kets.
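The Reidemeister II computation above can be checked numerically by representing elements as pairs (coefficient of 1, coefficient of E) with the rule E E = d E. A sketch (our own; the sample values of A are arbitrary):

```python
# Represent algebra elements as (c1, cE) meaning c1*1 + cE*E, with E*E = d*E,
# and verify that chi * chibar = 1 exactly when d = -A^2 - A^(-2).

def mul(x, y, d):
    a1, aE = x
    b1, bE = y
    # (a1 + aE E)(b1 + bE E) = a1 b1 * 1 + (a1 bE + aE b1 + aE bE d) * E
    return (a1 * b1, a1 * bE + aE * b1 + aE * bE * d)

for A in (0.7, 1.3, 2.0):
    d = -A ** 2 - A ** -2          # the loop value
    chi = (A, 1 / A)               # crossing:          A*1 + A^(-1)*E
    chibar = (1 / A, A)            # switched crossing: A^(-1)*1 + A*E
    one, e = mul(chi, chibar, d)
    print(abs(one - 1) < 1e-9, abs(e) < 1e-9)   # True True for each A
```

The E-coefficient A^2 + A^(-2) + d vanishes precisely at the stated loop value, which is the invariance condition.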
Consider a molecule that is obtained by "folding" a long chain molecule. There is a set of sites on the long chain that are paired to one another to form the folded molecule. The difficult problem in protein folding is the determination of the exact form of the folding, given a multiplicity of possible paired sites. Here we assume that the pairings are given beforehand, and consider the abstract structure of the folding _and_ its possible embeddings in three dimensional space.

_Let the paired sites on the long chain be designated by labeled bras and kets, with the bra appearing before the ket in the chain order._ Thus <a| and |a> would denote such a pair, and a sequence such as

W = <a| <b| <c| |c> |b> <d| |d> |a>

could denote the paired sites on the long chain. See Figure 3 for a depiction of this chain and its folding. In this formalism we do not assume any identities about moving containers or extainers, since the exact order of the sites along the chain is of great importance. We say that two chains are _isomorphic_ if they differ only in their choice of letters. Thus <a| <b| |b> |a> and <c| <d| |d> |c> are isomorphic chains. Note that each bra-ket pair in a chain is decorated with a distinct letter.

Written in bras and kets, a chain has an underlying parenthesis structure that is obtained by removing all vertical bars and all letters. Call this P(W) for a given chain W. Thus for the chain W above we have

P(W) = <<<>><>>.

Note that in this case P(W) is a legal parenthesis structure in the usual sense of containment and paired brackets. Legality of parentheses is defined inductively:

1. <> is legal.
2. If X and Y are legal, then XY is legal.
3. If X is legal, then <X> is legal.

These rules define legality of finite parenthetic expressions. In any legal parenthesis structure, one can deduce directly from that structure which brackets are paired with one another. Simple algorithms suffice for this, but we omit the details.
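One such simple algorithm is a standard stack scan. The sketch below (ours) both checks legality and recovers the intrinsic pairing of brackets:

```python
# Stack scan over a parenthesis structure: returns the intrinsic pairing
# {open_index: close_index} when the structure is legal, else None.

def pairing(s):
    stack, pairs = [], {}
    for i, ch in enumerate(s):
        if ch == '<':
            stack.append(i)
        elif ch == '>':
            if not stack:
                return None       # unmatched close bracket
            pairs[stack.pop()] = i
        else:
            return None           # illegal character
    return pairs if not stack else None

print(pairing('<<><>>'))   # {1: 2, 3: 4, 0: 5} -- legal, with its pairing
print(pairing('<<>'))      # None -- illegal (unclosed bracket)
```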
In any case, a legal parenthesis structure has an intrinsic pairing associated with it, and hence there is an inverse F to the mapping P. We define F(X), for a legal parenthesis structure X, to be the result of replacing each paired < ... > in X by <a| ... |a>, where a denotes a specific letter chosen for that pair, with different pairs receiving different letters. Thus

F(<<>>) = <a| <b| |b> |a>.

Note that in the case above, we have that F(P(W)) is isomorphic to W.

*Figure 3 - Secondary structure*

A chain W is said to be a _secondary folding structure_ if P(W) is legal and F(P(W)) is isomorphic to W. The reader may enjoy the exercise of seeing that secondary foldings (when folded) form tree-like structures without any loops or knots. This notion of secondary folding structure corresponds to the usage in molecular biology, and it is a nice application of the bra-ket formalism. This also shows the very rich combinatorial background in the bras and kets that occurs before the imposition of any combinatorial algebra.

Here is the simplest non-secondary folding:

W = <a| <b| |a> |b>.

Note that P(W) = <<>> is legal, but that F(P(W)) = <a| <b| |b> |a> is not isomorphic to W. Such a W is sometimes called a "pseudo knot" in the literature of protein folding. Figure 4 should make clear this nomenclature. The molecule is folded back on itself in a way that looks a bit knotted.

*Figure 4 - A tertiary structure*

With these conventions it is convenient to abbreviate a chain by just giving its letter sequence and removing the (reconstructible) bras and kets. Thus the pseudo knot W above may be abbreviated by abab. One may wonder whether, at least theoretically, there are foldings that would necessarily be knotted when embedded in three dimensional space.
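The secondary-structure test (P(W) legal and F(P(W)) isomorphic to W) reduces to checking that the chain's own letters pair exactly as its parenthesis structure does. A sketch with our own encoding of chains as token lists:

```python
# A chain is a list of (letter, 'bra'|'ket') tokens. P strips letters; a chain
# W is secondary iff P(W) is legal and the pairing induced by P(W) matches the
# chain's own letter pairing (i.e. F(P(W)) is isomorphic to W).

def P(chain):
    return ''.join('<' if kind == 'bra' else '>' for _, kind in chain)

def pairing(s):                       # stack scan; None if illegal
    stack, pairs = [], {}
    for i, ch in enumerate(s):
        if ch == '<':
            stack.append(i)
        elif stack:
            pairs[stack.pop()] = i
        else:
            return None
    return pairs if not stack else None

def is_secondary(chain):
    pairs = pairing(P(chain))
    if pairs is None:
        return False
    # bracket-paired positions must carry the same letter
    return all(chain[i][0] == chain[j][0] for i, j in pairs.items())

W1 = [('a', 'bra'), ('b', 'bra'), ('b', 'ket'), ('a', 'ket')]   # <a|<b||b>|a>
W2 = [('a', 'bra'), ('b', 'bra'), ('a', 'ket'), ('b', 'ket')]   # pseudo knot abab
print(is_secondary(W1), is_secondary(W2))   # True False
```

Both chains have the same legal parenthesis structure <<>>; only the letter pairing distinguishes the secondary fold from the pseudo knot.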
With open ends, this means that the structure folds into a graph such that there is a knotted arc in the graph for some traverse from one end to the other. Such a traverse can go along the chain or skip across the bonds joining the paired sites. The answer to this question is yes: there are folding patterns that can force knottedness. Here is an example of such an intrinsically knotted folding. It is easy to see that this string is not a secondary structure. To see that it is intrinsically knotted, we appeal to the Conway-Gordon theorem, which tells us that the complete graph on seven vertices is intrinsically knotted. In closed circular form (tie the ends of the folded string together), the folding that corresponds to the above string retracts to the complete graph on seven vertices. Consequently, that folding, however it is embedded, must contain a knot by the Conway-Gordon theorem. We leave it as an exercise for the reader to draw an embedding corresponding to a folding of this string and to locate the knot! The question of intrinsically knotted foldings that occur in nature remains to be investigated.

Some examples from cellular automata clarify many of the issues about replication and the relationship of logic and biology. Here is an example due to Maturana, Uribe and Varela. See also the references for a global treatment of related issues. The ambient space is two dimensional, and in it there are "molecules" consisting in "dots" (see Figure 5). There is a minimum distance between the dots (one can place them on a discrete lattice in the plane), and "bonds" can form, with a probability of creation and a probability of decay, between molecules with minimal spacing. There are two types of molecules: "substrate" and "catalysts".
The catalysts are not susceptible to bonding, but their presence (within, say, three minimal step lengths) enhances the probability of bonding and decreases the probability of decay. Molecules that are not bonded move about the lattice (one lattice link at a time) with a probability of motion. In the beginning there is a randomly placed soup of molecules, with a high percentage of substrate and a smaller percentage of catalysts. What will happen in the course of time?

*Figure 5 - Proto-cells of Maturana, Uribe and Varela*

In the course of time, the catalysts (basically separate from one another due to lack of bonding) become surrounded by circular forms of bonded or partially bonded substrate. A distinction (in the eyes of the observer) between inside (near the catalyst) and outside (far from a given catalyst) has spontaneously arisen through the "chemical rules". Each catalyst has become surrounded by a proto-cell. No higher organism has formed here, but there is a hint of the possibility of higher levels of organization arising from a simple set of rules of interaction. _The system is not programmed to make the proto-cells._ They arise spontaneously in the evolution of the structure over time. One might imagine that in this way organisms could be induced to arise as the evolutionary behavior of formal systems. There are difficulties, not the least of which is that there are nearly always structures in such systems whose probability of spontaneous emergence is vanishingly small.

A good example is given by another automaton: John H. Conway's game of "Life". In "Life" the cells appear and disappear as marked squares in a rectangular planar grid. A newly marked cell is said to be "born". An unmarked cell is "dead". A cell "dies" when it goes from the marked to the unmarked state. A marked cell "survives" if it does not become unmarked in a given time step.
According to the rules of Life, an unmarked cell is born if and only if it has exactly three marked neighbors. A marked cell survives if it has either two or three marked neighbors. All cells in the lattice are updated in a single time step. The Life automaton is one of many automata of this type, and indeed it is a fascinating exercise to vary the rules and watch a panoply of different behaviors. For this discussion we concentrate on some particular features.

There is a configuration in Life called a "glider". See Figure 6, which illustrates a "glider gun" (discussed below) that produces a series of gliders going diagonally from left to right down the Life lattice. The glider consists in five cells in one of two basic configurations. Each of these configurations produces the other (with a change in orientation). After four steps the glider reproduces itself in form, but shifted in space. Gliders appear as moving entities in the temporality of the Life board. The glider is a complex entity that arises naturally from a small random selection of marked cells on the Life board. Thus the glider is a "naturally occurring entity", just like the proto-cell in the Maturana-Uribe-Varela automaton. But Life contains potentially much more complex phenomena. For example, there is the "glider gun" (see Figure 6), which perpetually creates new gliders. The "gun" was invented by a group of researchers at MIT in the 1970s (the Gosper group). It is highly unlikely that a gun would appear spontaneously in the Life board. Of course there is a tiny probability of this, but we would guess that the chance of the appearance of the glider gun by random selection or evolution from a random state is similar to the probability of all the air in the room collecting in one corner. Nevertheless, the gun is a natural design based on forms and patterns that do appear spontaneously on small Life boards. The glider gun emerged through the coupling of the power of human cognition and the automatic
behavior of a mechanized formal system. Cognition is in fact an attribute of our biological system at an appropriately high level of organization. But cognition itself looks as improbable as the glider gun! Do patterns as complex as cognition or the glider gun arise spontaneously in an appropriate biological context?

*Figure 6 - Glider gun and gliders*

There is a middle ground. If one examines cellular automata of a given type and varies the rule set randomly, rather than varying the initial conditions for a given automaton, then a very wide variety of phenomena will present themselves. In the case of molecular biology at the level of the DNA, there is exactly this possibility of varying the rules, in the sense of varying the sequences in the genetic code. So it is possible at this level to produce a wide range of remarkably complex systems.

Other forms of self-replication are quite revealing. For example, one might point out that a stick can be made to reproduce by breaking it into two pieces. This may seem satisfactory on the first break, but the breaking cannot be continued indefinitely. In mathematics, on the other hand, we can divide an interval into two intervals and continue this process ad infinitum. For a self-replication to have meaning in the physical or biological realm, there must be a genuine repetition of structure from original to copy.
At the very least, the interval should grow to twice its size before it divides (or the parts should have the capacity to grow independently). A clever automaton, due to Chris Langton, takes the initial form of a square in the plane. The square extrudes an edge that grows to one edge length and a little more, turns by ninety degrees, grows one edge length, turns by ninety degrees, grows one edge length, turns by ninety degrees, and, when it grows enough to collide with the original extruded edge, cuts itself off to form a new adjacent square, thereby reproducing itself. This scenario is then repeated as often as possible, producing a growing cellular lattice. See Figure 7.

*Figure 7 - Langton's automaton*

The replications that happen in automata such as Conway's Life are all really instances of periodicity of a function under iteration. The glider is an example where the Life game function L applied to an initial condition G yields L^4(G) = G', where G' is a rigid motion of the plane applied to G. Other intriguing examples of this phenomenon occur. For example, the initial condition D for Life shown in Figure 8 has the property that, for some k, L^k(D) = D' + R, where D' is a rigid motion of the plane applied to D, and D' and the residue R are disjoint sets of marked squares in the lattice of the game. D itself is a small configuration of eight marked squares fitting into a small rectangle, and so has only a tiny probability of being chosen at random as eight points on the board.

*Figure 8 - Condition D with geometric period*

Should we regard self-replication as simply an instance of periodicity under iteration?
Perhaps, but the details are more interesting in a direct view. The glider gun in Life is a structure such that further iterations move the disjoint glider away from the gun, so that the gun can continue to operate as an initial condition for L in the same way. A closer look shows that the gun is fundamentally composed of two parts, P and Q, such that the Life evolution carries P to a version of Q plus a residue that is a rectangular block, where Q is a mirror image of P, while the evolution carries Q to a version of P plus a small non-rectangular residue. See Figure 9 for an illustration showing the parts P and Q (left and right), flanked by small blocks that form the ends of the gun. One also finds that this is the internal mechanism by which the glider gun produces the glider. The extra blocks at either end of the glider gun act to absorb the residues that are produced by the iterations. Thus the end blocks are catalysts that promote the action of the gun. Schematically, the glider production goes as follows: P and Q interact, exchange roles, and emit a glider, whence the glider is an autonomous entity no longer involved in the structure of P and Q. It is interesting that Q is a spatially and time shifted version of P; thus P and Q are really "copies" of each other, in an analogy to the structural relationship of the Watson and Crick strands of the DNA. The remaining part of the analogy is the way the catalytic rectangles at the ends of the glider gun act to keep the residue productions from interfering with the production process. This is analogous to the enzyme action of the topoisomerase in the DNA.
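The basic periodicity claim for the glider, L^4(G) = G' with G' a rigid motion (here a translation) of G, can be checked in a few lines. A minimal Life engine (our sketch, using the standard birth-on-3, survive-on-2-or-3 rules stated earlier):

```python
# Minimal Game of Life engine on an unbounded lattice of (x, y) cells,
# used to check that the glider returns to its own form after four steps,
# translated diagonally by (1, 1).
from collections import Counter

def step(cells):
    # count, for every lattice site, how many of its 8 neighbors are alive
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # birth on exactly 3 neighbors; survival on 2 or 3
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in cells)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)

print(g == {(x + 1, y + 1) for (x, y) in glider})   # True: L^4(G) = shifted G
```

The same engine, started from a glider gun configuration, would exhibit the P/Q exchange and glider emission described above, though the gun's 36-cell layout is too large to inline here.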
* figure 9 - p (left) and q (right) compose the glider gun * the point about this symbolic or symbiological analysis is that it enables us to take an analytical look at the structure of different replication scenarios, for comparison and for insight. we began with the general question: what is the relationship of logic and biology? certain fundamentals, common to both, are handled quite differently. these are certain fundamental distinctions: the distinction of symbol and object (the name and the thing that is named), and the distinction of a form and a copy of that form. in logic the symbol and its referent are normally taken to be distinct. this leads to a host of related distinctions, such as the distinction between a description or blueprint and the object described by that blueprint. a related distinction is the dichotomy between software and hardware. the software is analogous to a description. hardware can be constructed with the aid of a blueprint or description. but software intermediates between these domains, as it is an _instruction_. an instruction is not a description of a thing, but a blueprint for a process. software needs hardware in order to become an actual process. hardware needs software as a directive force. although mutually dependent, hardware and software are quite distinct. in logic and computer science the boundary between hardware and software is first met at the machine level, with the built-in capabilities of the hardware determining the type of software that can be written for it.
even at the level of an individual gate, there is the contrast between the structure of that gate as a design and the implementation of that design that is used in the construction of the gate. the structure of the gate is mathematical. yet there is the physical implementation of these designs, a realm where the decomposition into parts is not easily mutable. natural substances are used: wood, metal, particular compounds, atomic elements and so on. these are subject to chemical or even nuclear analysis and production, but eventually one reaches a place where nature takes over the task of design. in biology it is the reverse. no human hand has created these designs. the organism stands for itself, and even at the molecular level the codons of the dna are not symbols. they do not stand for something other than themselves. they cooperate in a process of production, but no one wrote their sequence as software. there is no software. there is no distinction between hardware and software in biology. in logic a form arises via the syntax and alphabet of a given formal system.
that formal system arises via the choices of the mathematicians who create it. they create it through appropriate abstractions. human understanding fuels the operation of a formal system. understanding, imaged into programming, fuels the machine operation of a mechanical image of that formal system. the fact that both humans and machines can operate a given formal system has led to much confusion, for they operate it quite differently. _humans are always on the edge of breaking the rules, either through error or inspiration. machines are designed by humans to follow the rules, and are repaired when they do not do so. humans are encouraged to operate through understanding, and to create new formal systems (in the best of all possible worlds)._ here is the ancient polarity of syntax (for the machine) and semantics (for the person). the person must mix syntax and semantics to come to understanding. so far, we have only demanded an adherence to syntax from the machines. the movement back and forth between syntax and semantics underlies all attempts to create logical or mathematical form. this is the cognition behind a given formal system. there are those who would like to create cognition on the basis of syntax alone. but the cognition that we all know is a byproduct of, or an accompaniment to, biology. biological cognition comes from a domain where there is at base no distinction between syntax and semantics. to say that there is no distinction between syntax and semantics in biology is not to say that it is pure syntax. syntax is born of the possibility of such a distinction.
in biology an energetic chemical and quantum substrate gives rise to a "syntax" of combinational forms (dna, rna, the proteins, the cell itself, the organization of cells into the organism). these combinational forms give rise to cognition in human organisms. cognition gives rise to the distinction of syntax and semantics. cognition gives rise to the possibility of design, measurement, communication, language, physics and technology. in this paper we have covered a wide ground of ideas related to the foundations of mathematics and its relationship with biology and with physics. there is much more to explore in these domains. the result of our exploration has been the articulation of a mathematical region that lies in the crack between set theory and its notational foundations. we have articulated the concepts of container and extainer and shown how the formal algebras generated by these forms encompass significant parts of the logic of dna replication, the dirac formalism for quantum mechanics, formalism for protein folding, and the temperley-lieb algebra at the foundations of topological invariants of knots and links. it is the mathematician's duty to point out formal domains that apply to a multiplicity of contexts. in this case we suggest that it is just possible that there are deeper connections among these apparently diverse contexts that are only hinted at in the steps taken so far. the common formalism can act as compass and guide for further exploration. e. l. zechiedrich, a. b. khodursky, s. bachellier, r. schneider, d. chen, d. m. j. lilley and n. r. cozzarelli, roles of topoisomerases in maintaining steady-state dna supercoiling in _escherichia coli_, _j. biol. chem._ 275:8103-8113 (2000). s.
lomonaco jr, "a rosetta stone for quantum mechanics with an introduction to quantum computation", in "quantum computation: a grand mathematical challenge for the twenty-first century and the millennium", ams, providence, ri (2000) (isbn 0-8218-2084-2). | in this paper we explore the boundary between biology and the study of formal systems (logic). |
this is one of my favorite jokes: _three logicians walk into a bar. the waitress asks, "do you all want beer?" + the first logician answers, "i do not know." + the second logician answers, "i do not know." + the third logician answers, "yes."_ this joke reminds me of hat puzzles. in the joke each logician knows whether or not s/he wants a beer, but doesn't know what the others want to drink. in hat puzzles logicians know the colors of the hats on others' heads, but not the color of their own hats. here is a hat puzzle where the logicians provide the same answers as in the beer joke. three logicians wearing hats walk into a bar. they know that the hats were placed on their heads from the set of hats in figure [fig:hatset]. the total number of available red hats was three, and the total number of available blue hats was two. [fig:distinctcolors] it is natural to try to reuse the solution we know for the previous puzzle. let us say that the number of hats is and the number of logicians is. suppose the last person sums the colors modulo and announces his/her own color so that the total sum is zero. the problem is that s/he might announce the color of one of the logicians in front of him/her. suppose the last logician says "red." when it's time for the logician who is actually wearing a red hat to speak, s/he will realize that his/her hat is red, and oops, s/he can't repeat this color.
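before turning to the fix, the underlying modular-sum strategy (with no constraint on repeating colors) can be checked by simulation. the sketch below is our own illustration, not from the paper: colors are encoded as residues 0..k-1, the last logician in line speaks first, and only that first speaker can ever be wrong.

```python
import random

def run_line(hats, k):
    """simulate the modular-sum hat strategy.

    hats[i] is the color (0..k-1) of logician i; logician n-1 speaks
    first and sees all hats in front (indices 0..n-2). everyone keeps
    the running total of announced-plus-seen colors at 0 modulo k.
    returns the list of announced colors."""
    n = len(hats)
    announced = [None] * n
    # the last logician announces the color making the total sum 0 mod k
    announced[n - 1] = (-sum(hats[:n - 1])) % k
    # each remaining logician hears all previous announcements, sees the
    # hats in front, and solves for the unique color closing the sum
    for i in range(n - 2, -1, -1):
        heard = sum(announced[i + 1:])
        seen = sum(hats[:i])
        announced[i] = (-heard - seen) % k
    return announced

k, n = 5, 12
hats = [random.randrange(k) for _ in range(n)]
guesses = run_line(hats, k)
wrong = sum(g != h for g, h in zip(guesses, hats))
assert wrong <= 1  # only the logician who speaks first can be mistaken
```

the no-repeat twist discussed next breaks exactly this scheme: the first speaker's announcement may collide with a color that someone in front must later name.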
if s/he says a different color randomly, s/he will throw off the strategy and mislead everyone in front. is there a way to rescue this strategy so that only a small number of logicians might be mistaken? it appears that there is. suppose the last logician says "red," and the logician in the red hat says "blue." the blue color wasn't announced yet, so the logician in the blue hat is most probably way in front of the red-hatted logician. that means that all the logicians between the red-hatted and the blue-hatted logicians will know that something wrong happened. they will not be misled. they will realize that the logician who said "blue" said so because s/he was not allowed to say his/her own color. so that color must be red. that means all the logicians in between can still figure out their colors. here is the strategy where not more than three people will name a wrong color: the last person announces the color so that his/her color and the sum of the colors s/he sees modulo is zero. if there is a logician with this hat color, s/he says the hat color of the first person. this way the three people who might be mistaken are the last one, the logician with the hat color announced by the last person, and the first person in line. everyone else is guaranteed to be correct. this is quite a good solution, but there is a completely different solution that guarantees that not more than *one* person is mistaken. i invite you to try and solve it yourself. if you can't do it, you can find the solution in my paper devoted to this puzzle. convert the above puzzles into jokes! | this paper serves as the announcement of my program a joke version of the langlands program. in connection with this program, i discuss an old hat puzzle, introduce a new hat puzzle, and offer a puzzle for the reader. |
our everyday way of looking at space-time as a stage for physical events conflicts with the requirement of defining all physical quantities, including space and time, through precise measurement protocols. this means that we should more properly regard space-time as emerging from events, instead of pre-existing them. the operational definition of space-time is given by the protocol that sets up the coordinate system. for example, in the einstein protocol light pulses are sent back and forth between different locations: at the place where the signal originated, from the arrival time of the reflected signal one infers both the distance and the time of the remote event of signal reflection. the protocol shows how space-time is indeed a coherent organization of inferences based on a causal structure for events. the clock itself is just a sequence of events, a light pulse bouncing between two mirrors. the closer the mirrors, the more precise the clock, and the more refined the coordinate system. the above reasoning shows that ultimately space and time are defined through pure event-counting, precisely counting _tic-tacs_ of the observer's clock, and we are thus led to regard space and time as emergent from the topology of the causal network of events. the events of the network do not need to be regarded as actual, but can be just potential, and the fabric of space-time is precisely the network of causal links between them. the idea of deriving space-time from purely causal structures is not new. raphael sorkin started an independent research line in quantum gravity based on this idea more than two decades ago. this was motivated by the potentialities of the approach residing in the natural discreteness of the causal network, which also provides a history-space for a path-integral formulation. the possibility of recovering the main features of the space-time manifold topology, differentiable structure, and
conformal metric has been shown, starting from discrete sets of points endowed with a causal partial ordering. along these lines, in an operational context, lucien hardy has also formulated a _causaloid_ approach, which considers the possibility of a dynamical treatment of the causal links. in the causal-set approach of sorkin, events are randomly scattered in order to avoid occurrence of their sparseness in the boosted frames, which would lead to violation of lorentz invariance. clearly the randomness of sorkin's events should not be regarded in terms of their location on a background, otherwise we contradict the very idea of space-time emergence. we then need to consider randomness at the pure topological level, and this means having random causal connections. however, regarding the causal connections as an irreducible description of the physical law, a random topology would then correspond to having a random physical law at the most microscopic level (the planck scale), and one may argue that a "random law" would contradict the very notion of law. instead, the randomness should result from the law itself, e.g. in a quantum cellular automaton, where randomness comes from the quantum nature of the network. the universality of the physical law thus leads us to take the causal network as topologically homogeneous. topological homogeneity then has the added bonus that the metric simply emerges from the pure topology by just counting events along the network. it is obvious that the discreteness of the network will lead to violation of lorentz covariance (and the other space symmetries) at the planck-scale level: however, one must have a theory where covariance is restored in the large-scale limit (the fermi scale), corresponding to counting huge numbers of events.
with the above motivations, in this paper we analyze the mechanism of emergence of space-time from pure homogeneous topology in dimensions. we present a digital version of the lorentz transformations, along with the corresponding digital-analog conversion rule. upon considering the causal connections as exchanges of classical information, we can establish coordinate systems via an einsteinian protocol, leading to a digital version of the lorentz transformations. in a computational analogy first noticed by leslie lamport, the foliation construction can be regarded as the synchronization protocol, with a global clock, of the calls to independent subroutines (the causally independent events) in a parallel distributed computation. the boosts are determined by the relative lengths of the _tic_ and _tac_ of the clock, and the lorentz time-dilation corresponds to an increased number of leaves within a clock _tic-tac_, whereas space-contraction results from the corresponding decreased density of events per leaf, as first noticed in ref. . we will see that the operational procedure of building up the coordinate system introduces an in-principle indistinguishability between neighboring events, resulting in a network that is coarse-grained, the thickness of the event being a function of the observer's clock. the digital version of the lorentz transformation is an integer relation which differs from the usual analog transformation by a multiplicative real constant corresponding exactly to the event thickness. the composition rule for velocities is independent of such a constant, and is the same in both the analog and the digital versions. preliminary results of the present work were already presented in ref.
the illustrated simple classical construction can be extended to space dimension greater than one, but at the price of anisotropy of the maximal speed, due to the weyl-tiling problem. this issue is cured if the causal network is quantum, as e.g. in a quantum cellular automaton, and isotropy is recovered with quantum coherence, corresponding to superposition of causal paths. we will thus argue that in a causal network description of space-time, the quantum nature of the network is crucial. the first problem to address is which specific lattice should be adopted for the causal network. in our convention the causal arrow is directed from the bottom to the top of the network. the dimension of the emerging space-time corresponds to the graph-dimension of the network, which is the dimension of the embedding manifold such that all links can be taken as segments of straight line with the same length. we will require that the lattice be pure topology (namely with all events equivalent), corresponding to a locally homogeneous space-time, and with no redundant links. it is then easy to see that in 1+1 dimensions there are only three possible lattices: the square, the triangular, and the honeycomb ones. the honeycomb lattice has two inequivalent types of events (having one input and two output links, and vice versa), and the corresponding "undressed" topology, where each couple of connected inequivalent events is merged into a single event, reduces to the square lattice. the triangular lattice, on the other hand, has redundant causal links (the middle vertical ones). we are thus left with the square lattice. we always assume the network links as oriented according to the causal arrow.
in the square-lattice network there are thus two types of link: toward the right and toward the left, shortly _r_-link and _l_-link. two events are in the same position (for some boosted reference frame) if they are connected by a path made with a sequence of _r_-links followed by a sequence of _l_-links. when the two sequences contain the same number of links, the reference is at rest. a clock is a sequence of causally connected events periodically oscillating between two positions. for an einstein clock the oscillation (_tic-tac_) is exactly the same couple of sequences of _l_- and _r_-links identifying events in the same position. the precision of the clock, namely the minimum amount of time that it can measure, is the number of links of a complete _tic-tac_. the _tic-tac_ is indivisible, namely the sole _tic_ (or _tac_) is not a complete measured time interval, since it involves two different positions. [figure caption: from the left to the right we have the rest-frame clock, the clock corresponding to, and boosted frames for, and, respectively, corresponding to digital speed, and, respectively. the case has doubled imprecision compared to the case. [f:clocks]] in the following we will call _light signals_ those sequences of events that are connected only by _r_-links or only by _l_-links, namely making segments at 45 degrees with the horizontal in the network. their "speed" is equal to one event per step, and is the maximum speed allowed by the causality of the network, since connecting events along a line making a smaller angle with the horizontal would require following some causal connections in the backward direction, from the output to the input. in this way the causality of a homogeneous causal network suffices to guarantee a bound for the speed of information flow.
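the bound on the speed of information flow can be illustrated with a short sketch (the encoding is our own, with an event labelled by its spatial coordinate): repeatedly following _r_-links and _l_-links from the origin produces a digital light cone that spreads at exactly one event per step.

```python
def causal_future(steps):
    """positions reachable from the origin of a 1+1 square-lattice causal
    network after `steps` links, each link being an r-link (x -> x + 1)
    or an l-link (x -> x - 1)."""
    frontier = {0}
    for _ in range(steps):
        frontier = {x + 1 for x in frontier} | {x - 1 for x in frontier}
    return frontier

t = 6
cone = causal_future(t)
assert max(cone) == t and min(cone) == -t   # maximal "digital speed" is 1
assert all((x - t) % 2 == 0 for x in cone)  # parity constraint of the lattice
```

the extreme points of the cone, reached by pure sequences of _r_-links or _l_-links, are exactly the light signals of the text.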
in the following we will take the clock _tic-tac_ made with _r_-links followed by _l_-links (see fig. [f:clocks]). any clock allows one to introduce a reference frame, which is just a foliation of the network built up using the einstein protocol. from the start of the clock _tic-tac_ a light signal is sent to an event in a different position and then received back at the clock. the intermediate time between the sending and the receiving event is taken as synchronous with the event at the turning point, and the number of _tic-tacs_ divided by two is taken as the distance between the turning point and the clock, conventionally located at the beginning of the _tic-tac_. in this way we build the foliation corresponding to a given clock. a set of synchronous events identifies a leaf of the foliation. in fig. [f:clocks2] the einstein protocol is illustrated in two particular reference frames. the figure on the left corresponds to the rest-frame, with the blue lines depicting the coordinate system established using the clock with (see fig. [f:clocks]). the green lines represent light signals bouncing between the clock and four particular events in the network. these events are synchronous, since the intermediate time between the sending and the receiving event on the clock is the same for all of them. they lie on the same leaf of the foliation, but at different positions, respectively: the spatial coordinate is obtained by counting the _tic-tacs_ between the sending and the receiving event, divided by two. the right figure represents a boosted frame for, built up using the same protocol as in the left figure (see fig. [f:clocks]). [f:clocks2] due to the indivisibility of the _tic-tac_, we see that there are indiscernible events, for which the synchronization occurs in the middle of the _tic-tac_. we are thus led to identify events and merge them into thicker coarse-grained events. this is done as follows. we identify the events along the _tic_ and those along the _tac_, so that the _tic-tac_ is always regarded as the bouncing between two next-neighbour events. then we merge events into minimal sets so that the topology is left invariant (see figures [f:foliation11] and [f:foliation_coarse]). we can distinguish between two different kinds of coarse-graining: one due to the boosting (in yellow in the figures), and one due to the intrinsic imprecision of the clock (in gray). the difference between the two is clarified in fig. [f:foliation_coarse].
in the top figure, events along the _tic_ and events along the _tac_ are identified in the boosted frame. then events are merged into minimal sets (in yellow) so that the topology is left invariant (the merged events are again events of a square-lattice network). in the central figure the coarse-graining associated to the intrinsic imprecision is added in gray, and finally, in the bottom figure the circuit is stretched so as to have all synchronous events on horizontal lines, and events located in the same position on vertical lines. notice that in the special case of the rest-frame, see fig. [f:foliation11], the coarse-graining is just due to the intrinsic imprecision of the clock. the velocity of the boosted frame can be easily written in terms of the and of the _tic-tac_ of the clock, by simply evaluating the ratio of the distances in space and time between the two ending points of the _tic-tac_, namely. thanks to the invariance of topology under the boost coarse-graining, the above identity holds also for the motion relative to any boosted frame, whence, upon defining with (denoting rational numbers) and for frames and, and by the relative velocity of frame with respect to frame, one has. now, by using the trivial identities and, one has, which by simple algebraic manipulations immediately gives. the last identity is the composition rule of parallel velocities (the only possibility in dimension) in special relativity. [figure caption: check of eqs. ([e:lorentztrans1]) and ([e:lorentztrans2]), leading to the digital version of the lorentz transformations ([ttrans1]) and ([ttrans2]). *left figure:* the reference frame with is represented by the tiny network in black, whereas the coarser network in blue represents the boosted reference frame with. according to eq. ([e:speed12]) the relative velocity of with respect to is. in order to connect the coordinate systems in the two frames we have chosen the same origin on both frames. the generic event has coordinates in the two frames, respectively. *right figure:* a spatial step in corresponds to space and time steps in; in the same way a time step in corresponds to space and time steps in. this correspondence allows one to determine the coordinates of a given event in the frame in terms of its coordinates in the frame. the resulting transformations are in eqs. ([e:lorentztrans1]) and ([e:lorentztrans2]). [f:lorentztrans]] now we use the einstein protocol to construct the boosted coordinate system with respect to the rest-frame, along with the relative coordinate systems between any couple of boosted frames. we will now see that the coordinates of an event transform from the frame to the frame as follows. in fact, from a simple inspection of figure [f:lorentztrans] one can check eqs. ([e:lorentztrans1]) and ([e:lorentztrans2]) with the frame as the rest frame. in the left figure the reference frame with is represented by the tiny network in black, whereas the coarser network in blue represents the boosted reference frame with. according to eq. ([e:speed12]) the relative velocity of with respect to is. in order to connect the coordinate systems in the two frames we have chosen the same origin on both frames. the generic event has coordinates in the two frames, respectively. in the figure on the right one can see that a spatial step in corresponds to space and time steps in. in the same way a time step in corresponds to space and time steps in. this correspondence allows one to determine the coordinates of a given event in the frame in terms of its coordinates in the frame. the resulting transformations are in eqs. ([e:lorentztrans1]) and ([e:lorentztrans2]). the invariance of topology with boost guarantees that they also hold between any couple of boosted frames.
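the composition rule for parallel velocities obtained above is the standard special-relativistic one (in units with c = 1), and it is closed over the rational "digital" velocities. a sketch with exact rational arithmetic (the function name is ours):

```python
from fractions import Fraction

def compose(v1, v2):
    """relativistic addition of parallel velocities, units with c = 1."""
    return (v1 + v2) / (1 + v1 * v2)

v1 = Fraction(1, 3)
v2 = Fraction(1, 3)
assert compose(v1, v2) == Fraction(3, 5)  # composed velocity stays rational
assert compose(v1, 1) == 1                # the maximal speed is invariant
```

closure over the rationals is what makes the rule identical in the analog and digital versions: no multiplicative conversion constant enters the velocities.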
by elementary manipulation, eqs. ([e:lorentztrans1]) and ([e:lorentztrans2]) can be written in the more customary way: upon defining the following constant, depending on the clocks of the two frames, and using the identity, we obtain the _digital_ lorentz transformations. [figure caption: example corresponding to a digital time-dilation by a factor 2 (analog factor) and space-contraction by a factor (compare with the same factors in eqs. ([digitallorentz])). [f:lorentz]] eqs. ([digitallorentz]) differ from the usual _analog_ lorentz transformations by a multiplicative factor, which is logically required to make the transformations rational, compensating the irrationality of the boost factor. the digital-analog conversion is thus just a rescaling of both space and time coordinates by a factor depending on the boost, which is exactly the square root of the volume of the coarse-grained event, measured as the number of rest-frame events that it contains. such an event volume also affects the lorentz space-contraction and time-dilation factor, which in the digital case is given by, whereas in the analog case it is rescaled by the ratio of event volumes, leading to. thus, for example, for and corresponding to, the digital factor is, whereas the analog one is. the digital factor agrees with that of the lorentz time-dilation and space-contraction mechanism of ref. , given in terms of increased density of leaves and corresponding decreased density of events per leaf, as illustrated in fig. [f:lorentz]. we have analyzed the mechanism of emergence of space-time from homogeneous topology in dimensions, deriving the digital version of the lorentz transformations along with the corresponding digital-analog conversion rule. the homogeneity of topology physically represents the universality of the physical law (it is worth mentioning that such a law is stripped of the conventionality of space and time homogeneity: see e.g.
ref. ). we have built the digital coordinate system using einstein's protocol, with signals sent back and forth to events from an observer's clock. we found that the procedure introduces an in-principle indistinguishability between neighbouring events, due to the limited precision of the clock, resulting in a network that is coarse-grained, with the event thickness also depending on the boost. the digital version of the lorentz transformation is an integer relation which differs from the usual analog transformation by a multiplicative real constant corresponding to the event thickness. [figure caption: -dimensional computational network: view of a leaf in the rest frame. information must zig-zag to flow at the maximal speed in the diagonal direction. this leads to a slow-down by a factor of the analog speed compared to the cubic-axis direction.] the present purely classical kinematical construction does not straightforwardly extend from one dimension to larger dimensions, due to the weyl-tiling issue, namely that continuum geometry cannot simply emerge from counting sites on a discrete lattice, since e.g. in a square tiling one counts the same number of tiles along a side and along the diagonal of a square. thus, for example, as shown in fig. [f:pyth], in a causal network shaped as a square lattice the fastest speed would be along the cubic axes, whereas along diagonals information should zig-zag, resulting in a slowdown by a factor (or even in three dimensions).
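the weyl-tiling slowdown is elementary arithmetic: if information can only hop along the axes of a square tiling, the euclidean distance covered per link along the diagonal is smaller by a factor sqrt(2) than along an axis. a small sketch (the helper name is ours):

```python
from math import hypot, sqrt

def analog_speed(dx, dy):
    """euclidean distance covered per lattice link when information can
    only hop along the axes of a square tiling (manhattan steps)."""
    links = abs(dx) + abs(dy)  # minimal number of axis hops to (dx, dy)
    return hypot(dx, dy) / links

n = 10
axis = analog_speed(n, 0)  # target on a cubic axis
diag = analog_speed(n, n)  # target on the diagonal
assert axis == 1.0
assert abs(axis / diag - sqrt(2)) < 1e-12  # sqrt(2) slowdown on the diagonal
```

in three dimensions the same counting along the main diagonal gives a slowdown of sqrt(3), matching the factors quoted in the text.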
indeed, a general theorem of tobias fritz shows that the polytope of points that can be reached in no more than links in a periodic graph does not approach a circle for large. since the polytope necessarily has distinguished directions, this means that there is no periodic graph for which this velocity set is isotropic. this result represents a no-go theorem for the emergence of an isotropic space from a discrete homogeneous causal network representing a _classical_ information flow. the situation, however, is completely different if one considers the possibility that information can flow in a superposition of paths along the network, as in a quantum cellular automaton, corresponding to a homogeneous quantum computational network. in fig. [f:automaton] a concrete example of evolution is given for a two-dimensional quantum weyl automaton of the kind of bialynicki-birula on a square lattice. one can see that the maximum propagation speed is isotropic after just a few steps. in a similar way full lorentz covariance is expected to be restored in the same limit of infinitely many events, a kind of thermodynamic limit bringing the automaton to the fermi scale. gmd acknowledges interesting discussions with raphael sorkin, seth lloyd, and tobias fritz. this work has been partially supported by prin 2008. l. bombelli, j. h. lee, d. meyer, and r. sorkin, phys. rev. lett. *59*, 521 (1987). f. markopoulou, gr-qc/0210086 (2002). j. henson, in _approaches to quantum gravity: towards a new understanding of space and time_, ed. d. oriti (cambridge university press, cambridge uk, 2006) (also gr-qc/0601121). s. surya, theor. comp. sc. *405*, 188 (2008). l. hardy, j. phys. a *40*, 3081 (2007). l. lamport, comm. acm *21*, 558 (1978). g. m. d'ariano, in cp1232 _quantum theory: reconsideration of foundations, 5_, ed. a. y. khrennikov (aip, melville, new york, 2010), pg. 3 (also arxiv:1001.1088). g. m.
dariano and a. tosini , arxiv:1008.4805 ( 2010 ) . d. malament , nous * 11 * 293 ( 1977 ) . h. weyl , _ philosophy of mathematics and natural sciences _ , princeton university press ( princeton 1949 ) . t. fritz , _ velocity polytopes of periodic graphs _ , draft ( 2011 ) . i. bialynicki - birula , phys . rev . d * 49 * 6920 ( 1994 ) . | in this paper we study the emergence of minkowski space - time from a causal network . differently from previous approaches , we require the network to be topologically homogeneous , so that the metric is derived from pure event - counting . emergence from events has an operational motivation in requiring that every physical quantity including space - time be defined through precise measurement procedures . topological homogeneity is a requirement for having the space - time metric emergent from the pure topology of causal connections , whereas physical homogeneity corresponds to the universality of the physical law . we analyze in detail the case of dimensions . if we consider the causal connections as an exchange of classical information , we can establish coordinate systems via an einsteinian protocol , and this leads to a digital version of the lorentz transformations . in a computational analogy , the foliation construction can be regarded as the synchronization with a global clock of the calls to independent subroutines ( corresponding to the causally independent events ) in a parallel distributed computation . thus the lorentz time - dilation emerges as an increased density of leaves within a single _ tic - tac _ of a clock , whereas space - contraction results from the corresponding decrease of density of events per leaf . the operational procedure of building up the coordinate system introduces an in - principle indistinguishability between neighboring events , resulting in a network that is coarse - grained , the thickness of the event being a function of the observer's clock .
the illustrated simple classical construction can be extended to space dimension greater than one , with the price of anisotropy of the maximal speed , due to the weyl - tiling problem . this issue is cured if the causal network is quantum , as e.g. in a quantum cellular automaton , and isotropy is recovered by quantum coherence via superposition of causal paths . we thus argue that in a causal network description of space - time , the quantum nature of the network is crucial . |
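the restoration of isotropy by superposed paths can be illustrated with a generic discrete - time quantum walk ; the sketch below uses a 4 - direction grover coin on a periodic square lattice , which is an assumption standing in for the weyl automaton of the text , not the actual bialynicki - birula rule :

```python
import numpy as np

# grover coin acting on 4 directions ( +x , -x , +y , -y )
G = np.full((4, 4), 0.5) - np.eye(4)
SHIFTS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def step(psi):
    # psi has shape (4, L, L): coin index, x, y (periodic boundary)
    psi = np.tensordot(G, psi, axes=(1, 0))        # coin toss in superposition
    out = np.empty_like(psi)
    for c, (dx, dy) in enumerate(SHIFTS):          # coin-conditioned shift
        out[c] = np.roll(psi[c], shift=(dx, dy), axis=(0, 1))
    return out

L, T = 64, 20
psi = np.zeros((4, L, L), dtype=complex)
psi[:, L // 2, L // 2] = 0.5                       # symmetric initial state
for _ in range(T):
    psi = step(psi)
prob = (np.abs(psi) ** 2).sum(axis=0)              # position distribution

# crude isotropy check: rms radius along an axis vs along the diagonal
xs = np.arange(L) - L // 2
X, Y = np.meshgrid(xs, xs, indexing="ij")
r_axis = np.sqrt((prob * X ** 2).sum())
r_diag = np.sqrt((prob * ((X + Y) / np.sqrt(2)) ** 2).sum())
print(prob.sum())        # unitary evolution keeps total probability at 1
print(r_axis, r_diag)    # equal by the reflection symmetry of this walk
```

the second - moment comparison is only a symmetry illustration ; the sharper statement in the text concerns the maximal propagation front becoming isotropic after a few steps .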
in recent years , multicarrier code - division multiple - access ( mc - cdma ) based on the quasi-/perfect- complementary sequence set ( in abbreviation , qcss / pcss ) has attracted much attention due to its potential to achieve low-/zero- interference multiuser performance , . here, a qcss ( or pcss ) refers to a set of two - dimensional matrices with low ( or zero ) non - trivial auto- and cross- correlation sums . in this paper , a complementary sequenceis also called a complementary matrix , and vice versa . to deploy a qcss ( or pcss ) in an mc - cdma system , every data symbol of a specific useris spread by a complementary matrix by simultaneously sending out all of its row sequences over a number of non - interfering subcarrier channels .because of this , the number of row sequences of a complementary matrix , denoted by , is also called the _ number of channels_. at a matched - filter based receiver , de - spreading operations are performed separately in each subcarrier channel , followed by summing the correlator outputs of all the subcarrier channels to attain a correlation sum which will be used for detection .a pcss may also be called a mutually orthogonal complementary sequence set ( mocss ) , a concept extended from mutually orthogonal golay complementary pairs ( gcps ) . however , a drawback of pcss is its small set size . specifically , the set size ( denoted by ) of pcss is upper bounded by the number of channels , i.e. , .this means that a pcss based mc - cdma system with subcarriers can support at most users only .against such a backdrop , there have been two approaches aiming to provide a larger set size , i.e. 
, .the first approach is to design zero- or low- correlation zone ( zcz / lcz ) based complementary sequence sets , called zcz - css , or lcz - css .a zcz - css ( lcz - css ) based mc - cdma system is capable of achieving zero- ( low- ) interference performance but requires a closed - control loop to dynamically adjust the timings of all users such that the received signals can be quasi - synchronously aligned within the zcz ( lcz ) .a second approach is to design qcss which has uniformly low correlation sums over all non - trivial time - shifts . as such, qcss can be utilized to achieve low - interference performance with a simpler timing - control system . to the authors best knowledge , the first aperiodic correlation lower bound of qcsswas derived by welch in , which states : where every quasi - complementary sequence is a matrix of order ( thus , every row sequence has length of ) with assumed energy of .the aforementioned set size upper bound of pcss , namely , , can also be obtained from ( [ welch_bound_for_cc ] ) by setting . on the other hand , if , one can show that , meaning that a larger set size can be supported by qcss .recently , a generalized levenshtein bound ( glb ) for qcss has been derived by liu , guan and mow in [ [ liuguanmow14 ] , _ theorem 1 _ ] .the key idea behind the glb ( including the levenshtein bound ) is that the weighted mean square aperiodic correlation of any sequence subset over the complex roots of unity should be equal to or greater than that of the whole set which includes all possible complex roots - of - unity sequences .the levenshtein bound was extended from binary sequences to complex roots - of - unity sequences by bozta .a lower bound for aperiodic lcz sequence sets was derived in by an approach similar to levenshtein s . 
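the key idea behind the levenshtein - type bounds quoted here — the maximum correlation of a set can never fall below any weighted mean of its squared correlations — can be sanity - checked numerically . a sketch with random quaternary ( complex roots - of - unity ) sequences ; the uniform random weighting is illustrative , not the levenshtein weight vector :

```python
import numpy as np

rng = np.random.default_rng(0)
K, N = 8, 32
# random quaternary sequences ( entries are 4th roots of unity )
seqs = np.exp(2j * np.pi * rng.integers(0, 4, size=(K, N)) / 4)

def aper_corr(x, y, tau):
    # aperiodic correlation at nonnegative shift tau
    return np.vdot(y[tau:], x[:N - tau])   # sum_i x_i * conj(y_{i+tau})

corrs = []
for u in range(K):
    for v in range(K):
        for tau in range(N):
            if u == v and tau == 0:
                continue                   # skip the trivial in-phase term
            corrs.append(abs(aper_corr(seqs[u], seqs[v], tau)))
corrs = np.array(corrs)

theta_max = corrs.max()                    # maximum correlation magnitude
w = rng.random(corrs.size)
w /= w.sum()                               # an arbitrary simplex weight vector
mean_sq = (w * corrs ** 2).sum()           # weighted mean square correlation
assert theta_max ** 2 >= mean_sq           # the max dominates every such mean
```

the inequality holds for any simplex weighting ; the art in the levenshtein and glb derivations is choosing the weight vector that makes the resulting lower bound as large as possible .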
in its bounding equation ,glb is a function of the simplex " weight vector , the set size , the number of channels , and the row sequence length .a necessary condition ( shown in [ [ liuguanmow14 ] , _ theorem 2 _ ] ) for the glb to be tighter than the welch bound is that , where with although a step - function " weight vector was adopted in [ [ liuguanmow14 ] , ( 34 ) ] , it only leads to a tighter glb for . as a matter of fact, the tightness of glb remains unknown for when is sufficiently large .the main objective of this paper is to optimize and then tighten the glb for _ all _ ( instead of _ some _ ) . for this, we are to find a ( locally ) optimal weight vector which is used in the bounding equation .a similar research problem was raised in for traditional binary sequences ( i.e. , non - qcss with ) .see for more details .the optimization of glb on qcss ( with ) , however , is more challenging because an analytical solution to a non - convex glb ( in terms of weight vector ) for _ all _ possible cases of is in general intractable .we first adopt a frequency - domain optimization approach in section iii - b to minimize the ( non - convex ) fractional quadratic function of glb .this is achieved by properly exploiting the specific structure of the circulant quadratic matrix in the numerator of the fractional quadratic term of glb . following this optimization approach, we find a new weight vector which leads to a tighter glb for _ all _ cases satisfying and , asymptotically ( in ) .our finding shows that the condition of , shown in [ [ liuguanmow14 ] , theorem 2 ] , is not only necessary but also sufficient , as tends to infinity .moreover , in section iii - c , it is proved that the newly found weight vector is a local minimizer to the fractional quadratic function of glb , asymptotically .we then examine in sections iv two weight vectors which were presented in for the tightening of the levenshtein bound on conventional single - channel ( i.e. 
, ) sequence sets .we extend their tightening capability to glb on multi - channel ( i.e. , ) qcss , although the proof is not straightforward .it is shown that each of these two weight vectors gives rise to a tighter glb ( over the welch bound ) for several small values of provided that .it is also noted that the glb from the newly found weight vector is ( in general ) tighter than the glbs from these two ( earlier found ) weight vectors , as shown by some numerical results .in this section , we first present some necessary notations and define qcss . then, we give a brief review of glb .for two complex - valued sequences ] , their aperiodic correlation function at time - shift is defined as when , is called the aperiodic cross - correlation function ( accf ) ; otherwise , it is called the aperiodic auto - correlation function ( aacf ) . for simplicity, the aacf of is denoted by .let be a set of matrices , each of order ( where ) , i.e. , {m\times n } = \left [ \begin{matrix } c^\nu_{0,0 } & c^\nu_{0,1 } & \cdots & c^\nu_{0,n-1}\\ c^\nu_{1,0 } & c^\nu_{1,1 } & \cdots & c^\nu_{1,n-1}\\ \vdots & \vdots & \ddots & \vdots\\ c^\nu_{m-1,0 } & c^\nu_{m-1,1 } & \cdots & c^\nu_{m-1,n-1}\\ \end{matrix } \right ] , \end{split}\ ] ] where .define the aperiodic correlation sum " of matrices and as follows , also , define the aperiodic auto - correlation tolerance and the aperiodic cross - correlation tolerance of as respectively .moreover , define the aperiodic tolerance ( also called the maximum aperiodic correlation magnitude " ) of as .when , is called a _ perfect complementary sequence set _ ( pcss ) ; otherwise , it is called a _ quasi - complementary sequence set _ ( qcss ) . 
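the definitions above map directly to code . a minimal sketch ( function names are ours , and the convention for negative shifts is one of the two standard ones ) computing the aperiodic correlation sum of two complementary matrices and the maximum aperiodic correlation magnitude of a set :

```python
import numpy as np

def aper_corr(x, y, tau):
    # aperiodic ( cross- ) correlation of two row sequences at shift tau
    n = len(x)
    if tau >= 0:
        return np.vdot(y[tau:], x[:n - tau])   # sum_i x_i * conj(y_{i+tau})
    return np.conj(aper_corr(y, x, -tau))

def corr_sum(C, D, tau):
    # aperiodic correlation sum: add the row-wise correlations over channels
    return sum(aper_corr(c, d, tau) for c, d in zip(C, D))

def tolerance(mats):
    # maximum aperiodic correlation magnitude over all matrix pairs
    # and all non-trivial shifts
    n = mats[0].shape[1]
    vals = [abs(corr_sum(C, D, tau))
            for i, C in enumerate(mats)
            for j, D in enumerate(mats)
            for tau in range(-(n - 1), n)
            if not (i == j and tau == 0)]
    return max(vals)

# a single golay complementary matrix has tolerance exactly 0
print(tolerance([np.array([[1.0, 1.0], [1.0, -1.0]])]))   # -> 0.0

# three random +/-1 matrices with 2 channels: a qcss, tolerance > 0
rng = np.random.default_rng(1)
mats = [rng.choice([-1.0, 1.0], size=(2, 8)) for _ in range(3)]
print(tolerance(mats))
```

the random set above must have strictly positive tolerance : a zero tolerance would make it a pcss with set size exceeding the number of channels , contradicting the bound stated earlier in the text .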
in particular ,when and , a pcss reduces to a matrix consisting of two row sequences which have zero out - of - phase aperiodic autocorrelation sums .such matrices are called golay complementary matrices ( gcms ) or golay complementary pairs ( gcps ) in this paper , and either sequence in a gcp is called a golay sequence .note that the transmission of a pcss or a qcss requires a multi - channel system .specifically , every matrix in a pcss ( or a qcss ) needs non - interfering channels for the separate transmission of row sequences .this is different from the traditional single - channel sequences with only .let ^{\text{t}}$ ] be a simplex " weight vector which is constrained by define a quadratic function where is a circulant matrix with all of its diagonal entries equal to , and its off - diagonal entries , where and the glb for qcss over complex roots of unity in is shown below .[ generalized_welch_bound_for_cc ] .\ ] ] a weaker simplified version of ( [ generalized_welch_bound_for_cc - equ ] ) is given below ..\ ] ] setting , the glb reduces to the welch bound for qcss in ( [ welch_bound_for_cc ] ) .[ rmk_nece_cond ] [ [ liuguanmow14 ] , _ theorem 2 _ ] for the glb to be tighter than the corresponding welch bound , it is _ necessary _ that , where is defined in ( [ nece_cond_qcssbd2 ] ) .[ [ liuguanmow14 ] , _ corollary 1 _ ] applying the weight vector with where , to ( [ generalized_welch_bound_for_cc - equ ] ) , we have the lower bound in ( [ zl_corollary_4_equ ] ) is tighter than the welch bound for qcss in ( [ welch_bound_for_cc ] ) if one of the two following conditions is fulfilled : ( 1 ) : , and ( 2 ) : , and .the necessary condition in _ remark [ rmk_nece_cond ] _ implies that for a given , the welch bound for qcss can not be improved if , where is defined in ( [ nece_cond_qcssbd2 ] ) . 
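as a quick concrete check , the length - 2 golay pair ( + + ) , ( + - ) together with a standard mate forms a ( set size , channels , length ) = ( 2 , 2 , 2 ) pcss : all out - of - phase autocorrelation sums and all cross - correlation sums vanish . a sketch ( the mate construction is a textbook one , stated here as an assumption rather than taken from this text ) :

```python
import numpy as np

def acs(C, D, tau):
    # aperiodic correlation sum of two M x N complementary matrices,
    # nonnegative shift tau
    return sum(np.vdot(d[tau:], c[:len(c) - tau]) for c, d in zip(C, D))

# golay pair a = (+ +), b = (+ -) stacked as a 2 x 2 complementary matrix,
# together with its assumed mate
C1 = np.array([[1, 1], [1, -1]], dtype=float)
C2 = np.array([[-1, 1], [-1, -1]], dtype=float)

assert acs(C1, C1, 1) == 0   # out-of-phase autocorrelation sums vanish
assert acs(C2, C2, 1) == 0
assert all(acs(C1, C2, t) == 0 for t in (0, 1))  # zero cross-correlation sums
```

so the pair { C1 , C2 } meets the pcss definition with set size equal to the number of channels , which is exactly the upper bound recalled above .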
on the other hand , the weight vector in ( [ leven_lcz_weighting_vector ] )can only lead to a tighter glb for .because of this , the tightness of glb is unknown in the following ambiguous zone . for sufficiently large , the above zone further reduces to by recalling ( [ nece_cond_qcssbd2 _ ] ) .one may visualize this zone in the shaded area of fig .[ fig_glbgap ] for .we are therefore interested in finding a weight vector which is capable of optimizing and tightening the glb for _ all _ ( rather than _ some _ ) . relating this objective to fig .[ fig_glbgap ] , such a weight vector can give us a tighter glb for the largest region right above the red diamond symbols .however , the optimization of glb in ( [ generalized_welch_bound_for_cc - equ ] ) is challenging because its fractional quadratic term ( in terms of ) is indefinite .more specifically , the quadratic term in the numerator is indefinite as some eigenvalues of the corresponding circulant matrix are negative when [ [ liuguanmow14 ] , appendix b ] .it is noted that indefinite quadratic programming ( qp ) is np - hard , even it has one negative eigenvalue only . moreover , checking local optimality of a feasible point in constrained qp is also np - hard .although some optimality conditions for constrained qp have been derived by bomze from the copositivity perspective , the situation becomes more complicated when indefinite fractional quadratic programming ( fqp ) problems are dealt with . according to , glb may be classified as a standard fqp ( stfqp ) as the feasible set is the standard simplex . to the best of the authors knowledge ,preisig pioneered an iterative algorithm for which convergence to a kkt point ( but can not be guaranteed to be a local minimizer ) of the stfqp can be proved .two algorithms for stfqp based on semidefinite programming ( sdp ) relaxations are presented in , yet the optimalities of the resultant solutions are unknown . 
as a matter of fact , the algorithms developed in may only be feasible for medium - scaled stfqp with .in contrast , we target at an analytical solution ( as opposed to a numerical solution ) which is applicable to large scale of glb ( e.g. , the sequence length ) .thus , the techniques used in may not be useful for the specific stfqp problem considered in this paper . in the sequel, we introduce a frequency - domain optimization approach which finds a local minimizer ( i.e. , a weight vector ) of the glb .we show that the obtained weight vector leads to a tighter glb for _ all _ and , asymptotically . to tighten the glb in ( [ generalized_welch_bound_for_cc - equ ] ), we adopt a novel optimization approach in this subsection , motivated by the observation that any circulant matrix [ e.g. , in ( [ leven - quad - fun - equ ] ) which forms a part of the glb quadratic function in ( [ generalized_welch_bound_for_cc - equ ] ) ] can be decomposed in the frequency domain .define and the -point discrete fourier transform ( dft ) matrix as {m , n=0}^{l-1},~\text{where}~f_{m , n}=\xi^{mn}_l.\ ] ] denote by the first column vector of in ( [ leven - quad - fun - equ ] ) , i.e. , ^{\text{t}}.\ ] ] let ^{\text{t}}.\ ] ] it is noted that . by , the circulant matrix defined in ( [ leven - quad - fun - equ ] ) can be expressed as where ^{\text{t}},\ ] ] and is the matrix with being the diagonal vector and zero for all the non - diagonal matrix entries .consequently [ [ nyy2014 ] , theorem 3.1 ] , similarly , by [ [ liuguanmow14 ] , appendix b ] , we have and for .note that for .moreover , we remark that this is because 1 .if is odd : 2 . if is even : to maximize the glb in ( [ generalized_welch_bound_for_cc - equ ] ) , it is equivalent to consider the following optimization problem . since is real - valued , is conjugate symmetric , i.e. , for . 
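the frequency - domain decomposition invoked here is the standard unitary diagonalization of a circulant matrix : its eigenvalues are the dft of the first column and its eigenvectors are the fourier modes . a quick numerical confirmation on a generic circulant ( not the specific glb matrix , whose entries are elided in this extraction ) :

```python
import numpy as np

L = 8
q = np.arange(L, dtype=float)        # any first column
Q = np.array([[q[(i - j) % L] for j in range(L)] for i in range(L)])

lam = np.fft.fft(q)                  # eigenvalues = dft of the first column
k = np.arange(L)
U = np.exp(2j * np.pi * np.outer(k, k) / L) / np.sqrt(L)  # fourier modes

# Q = U diag(lam) U^H: unitary diagonalization of a circulant matrix
assert np.allclose(Q, U @ np.diag(lam) @ U.conj().T)
```

when the first column is symmetric , i.e. its i - th and ( L - i ) - th entries coincide — as for the real symmetric matrix in the glb quadratic form — the eigenvalues lam are real , which is what the optimization above exploits .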
having this in mind ,we define taking advantage of the fact that are strictly smaller than other s with nonzero as shown in ( [ lambda_l1_equ ] ) , we have where the equality is achieved if and only if for . inspired by this observation , we relax the non - negativity constraint on , i.e. , some negative s may be allowed ( but the sum of all elements of must still be equal to 1 ) . with this , the optimization problem in ( [ opti_freqd ] ) can be translated to where from now on , we adopt the setting of where denote the magnitude and phase of , respectively . since , we have ^{\text{t}}. \end{split}\ ] ] to optimize the fractional function in ( [ opti_freqd_trans ] ) , we have the following lemma .[ lem_monofun ] the fractional function in terms of in ( [ opti_freqd_trans ] ) is 1 .case 1 : monotonically decreasing in if and ; 2 .case 2 : monotonically increasing in if , or and . to prove case 1 ,we first show that if and only if where is defined in ( [ nece_cond_qcssbd2 ] ) . for ease of analysis , we write where is a positive integer and .thus , .consequently , we have \\ & \leq \frac{n(mn-1)}{k } \left ( 1- \frac{n+1}{n+\epsilon } \right ) \\ & < 0 , \end{split}\ ] ] with which the proof of case 1 follows . the proof of case 2can be easily obtained by following a similar argument . for case 2 of _ lemma [ lem_monofun ]_ , it can be readily shown that the minimum of the fractional function in ( [ opti_freqd_trans ] ) is achieved at .thus , the weight vector in ( [ wtvec3_equ ] ) reduces to ^{\text{t } } , \end{split}\ ] ] where the corresponding glb reduces to the welch bound in ( [ welch_bound_for_cc ] ) .next , let us focus on the application of case 1 for glb tightening . 
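two constraints used repeatedly above are easy to verify numerically : a real weight vector has a conjugate - symmetric dft , and the zero - frequency component equals the sum of the weights , which the simplex constraint fixes to 1 . a sketch with an arbitrary simplex vector ( illustrative , not the paper's weight vector ) :

```python
import numpy as np

rng = np.random.default_rng(2)
w = rng.random(16)
w /= w.sum()                  # simplex: nonnegative entries summing to 1

e = np.fft.fft(w)             # frequency-domain view of the weight vector
assert np.isclose(e[0], 1.0)                  # e_0 = sum of the weights = 1
assert np.allclose(e[1:], np.conj(e[:0:-1]))  # e_i = conj(e_{L-i}), w real
```

relaxing nonnegativity , as done above , keeps only the first of these two constraints , which is why the optimization can be carried out over the frequency - domain components directly .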
in this case, we wish to know the upper bound of in order to minimize the fractional function of in ( [ opti_freqd_trans ] ) .coming back to the constraint of given in ( [ leven_weight_vector ] ) , and should satisfy thus , where the upper bound is achieved with equality when for any integer . by substituting into ( [ wtvec3_equ ] ), we obtain the following weight vector . where and is any integer .the resultant glb from this weight vector is shown in the following lemma .[ coro_glb_from_wetvec3 ] for and , we have ,\ ] ] where are given in ( [ lambdas ] ) .to analyze the asymptotic tightness of the lower bound in ( [ coro_glb_from_wetvec3-equ ] ) , we note that when is sufficiently large , the second condition in _ lemma [ coro_glb_from_wetvec3 ] _ , i.e. , is true for . to show this, we substitute into ( [ cond_case1 ] ) .after some manipulations , one can see that the inequality in ( [ cond_case1 ] ) holds if and only if carrying on the expression in ( [ k_split_equ ] ) , we require which is guaranteed to hold for sufficiently large because is strictly smaller than 1 by assumption . furthermore , we note that therefore , }=\frac{3m}{4k}+\frac{1}{2}-\frac{1}{\pi^2}.\ ] ] on the other hand , let us rewrite the welch bound expression ( [ welch_bound_for_cc ] ) as with then , with ( [ r1 ] ) and ( [ ch4_r1_largen ] ) , one can show that the lower bound in _lemma [ coro_glb_from_wetvec3 ] _ is asymptotically tighter than the welch bound in ( [ welch_bound_for_cc ] ) if and only if the following equation is satisfied . equivalently , we need to prove that for ( as ) , the following inequality holds . 
one can readily show that the condition given in ( [ bdfromoptwtvec2 ] ) is true for _ all _ .therefore , we have the following theorem .[ th4optwtvec ] the glb in ( [ coro_glb_from_wetvec3-equ ] ) which arises from the weight vector in ( [ wgtvec3_equ ] ) reduces to ,\ ] ] for sufficiently large .such an asymptotic lower bound is tighter than the welch bound for _ all _ and for _ all _ . in this subsection, we prove the proposed weight vector in ( [ wgtvec3_equ ] ) is a local minimizer of the glb in ( [ generalized_welch_bound_for_cc - equ ] ) under certain condition .we consider the weight vector by setting in ( [ wgtvec3_equ ] ) because other values of will lead to identical value of glb [ cf .( [ glb_quadra_fd ] ) and ( [ glb_quadra_fd _ ] ) ] . note that the frequency domain vector has and for all .our problem in this subsection can be formally cast as follows .define the fractional quadratic function is essentially the fractional quadratic term in ( [ generalized_welch_bound_for_cc - equ ] ) by replacing with . ] as follows . where , is the circulant matrix defined in ( [ leven - quad - fun - equ ] ) which has order and with . when and becomes sufficiently large ,prove that the weight vector in ( [ proposed_wgtvec ] ) is a local minimizer of , i.e. , holds for any feasible perturbation which has sufficiently small norm .to get started , we define it is easy to show that ( [ local_mini_equ ] ) is equivalent to the following inequality . let . since is a real vector , is conjugate symmetric in that for . by taking advantage of ( [ qa_in_freqdomain ] ) , we present the following properties which will be useful in the sequel . by ( [ multi_equ4 ] ) , ( [ multi_equ5 ] ) , ( [ multi_equ7 ] ) and ( [ multi_equ8 ] ) , we have |e_i|^2\right \}. \end{split}\ ] ] by ( [ multi_equ3 ] ) , ( [ multi_equ5 ] ) , ( [ multi_equ6 ] ) and ( [ multi_equ7 ] ) , we have therefore , can be expressed in the form shown in ( [ gamma_equ ] ) . 
since is a small perturbation , let us assume next , we proceed with the following two cases . 1 .case i : if there exists for .+ since we consider with sufficiently large , it is readily to show that holds for any [ see ( [ lambda_even_equ ] ) and ( [ lambda_odd_equ ] ) ] . by ( [ multi_equ4 ] ) , let us write where .furthermore , write \cdot ( 2n-1)=\lambda_1 a+b,\ ] ] where + [ rmk_on_ab ] since , and approach to + and , respectively , as grows sufficiently large .+ to show ( [ local_mini_equ ] ) [ and ( [ local_mini_equ2 ] ) ] holds , we only need to prove the right - hand term of ( [ local_mini_equ3 ] ) divided by is nonnegative , asymptotically . for this , our idea is to consider a fixed ( sufficiently large ) and prove that : ( 1 ) is lower bounded by a nonnegative value determined by only ; ( 2 ) tends to zero ( with an upper bounded ) regardless the value of . + from ( [ lambda_i_equ ] ) , we have for ease of analysis , let be an even integer is odd , we can prove ( [ local_mini_equ ] ) [ and ( [ local_mini_equ2 ] ) ] holds by almost the same arguments . 
] .hence , .since is a decreasing function of , we have also , where by noting ( a small positive angle ) and , we have by ( [ lambda_even_equ ] ) and ( [ lambda_odd_equ ] ) , we obtain on the other hand , \\ \rightarrow & ~ 0^{- } , \end{split}\ ] ] where denotes a sufficiently small value ( negative ) that approaches zero from the left .therefore , we have \cdot ( 2n-1)}{a}\\ = & { \lim\limits_{m\rightarrow+\infty}\frac{\lambda_1}{a } } \cdot \underbrace{\lim\limits_{m\rightarrow+\infty}a}_{\text{upper bounded } } + { \lim\limits_{m\rightarrow+\infty}\frac{b}{a}}%\rightarrow \frac{\xi}{a}.% > \frac{2}{3}\cdot \left ( 2\sum\limits_{i=2}^{n-1}|e_i|^2 \right ) \end{split}\ ] ] by ( [ lim_xi_a_equ ] ) and ( [ lambda1_equ ] ) , we assert that when is sufficiently large , the sign of the limit in ( [ lim_gep_equ ] ) will be identical to that of [ cf .( [ lim_xi_a_equ ] ) ] which is nonnegative .this shows that ( [ local_mini_equ ] ) [ and ( [ local_mini_equ2 ] ) ] holds for case i , asymptotically .2 . case ii : if for all .+ in this case , ( [ ab_equ ] ) reduces to since , we have where denotes the real part of complex data . consider which takes the following form . where and denotes the phase shift of .as a result , can be expressed as thus , since , we assert that for sufficiently large , holds because it will be dominated by the negative .our next task is to show that . by ( [ ei_equ ] ) , we have it is required in ( [ multi_equ2 ] ) that for all , i.e. , setting , we have setting , we have therefore , this shows holds provided . this can be easily satisfied by a sufficiently small .together with ( [ lambda1ab_caseii])-([lambda1ab_caseii_3 ] ) , we conclude that ( [ local_mini_equ ] ) [ and ( [ local_mini_equ2 ] ) ] holds for case ii , asymptotically .this completes the proof of the local optimality of the proposed weight vector in ( [ proposed_wgtvec ] ) . 
following a proof similar to the above, one can easily show that the weight vector in ( [ proposed_wgtvec ] ) is also a local minimizer of the constrained qp of when and are sufficiently large .in this section , we first consider another two weight vectors and study the tightness of their resultant glbs .then , we compare them with the proposed weight vector in ( [ wgtvec3_equ ] ) by some numerical results . in , liu _et al _ showed that the following positive - cycle - of - sine " weight vector where , asymptotically leads to a tighter levenshtein bound ( i.e. , ) for all . by [ [ liuparaguanbozas14 ] , _ proposition 1 _ ], one can show that the resultant glb from the weight vector in ( [ sine_shape_weight_vector ] ) can be written as follows .[ new_lwerbd_from_new_wv ] , % ~~\text{for}~2\leq m\leq 2n-1,\ ] ] where in what follows , we analyze the asymptotic tightness of the lower bound in ( [ zl - corollary - new - weight - vector - equ ] ) .define . obviously , is a real - valued constant with when is on the same order of ( i.e. , ) ; and when is dominated by asymptotically ( i.e. , ) .furthermore , define the fractional term in ( [ zl - corollary - new - weight - vector - equ ] ) as it is easy to see that the lower bound in ( [ zl - corollary - new - weight - vector - equ ] ) is tighter than the welch bound in ( [ welch_bound_for_cc ] ) if and only if where is defined in ( [ r1 ] ) . as tends to infinity , the inequality in ( [ iff ] ) is equivalent to when , we have and as . 
in this case , one can show that which can be ignored without missing the minimum point of interest in the right - hand side of ( [ asymp_iff ] ) .hence , we shall assume to be a non - vanishing real - valued constant with , and rewrite ( [ asymp_iff ] ) as here , the order of the limit and minimization operations can be exchanged because as a function of exists , as shown below .next , noting that , we can express ( [ r2 ] ) as where and after some manipulations , by ( [ ch4_r1_largen ] ) , ( [ r2_2a ] ) and ( [ ch4_q_largen ] ) , it follows that ( [ asymp_iff _ ] ) reduces to equivalently , we assert that the asymptotic lower bound in ( [ zl - corollary - new - weight - vector - equ ] ) is tighter than the welch bound if and only if where in fig .[ fig_optimal_r2 ] , and versus over the range of are plotted. it can be obtained from ( [ overlinek ] ) and fig . [ fig_optimal_r2 ]that by ( [ asym_bd_wtvec2_equ ] ) , one can see that the proposed weight vector in ( [ sine_shape_weight_vector ] ) asymptotically leads to a tighter glb for _ all _ if and only if the value of satisfies the following condition [ c.f .( [ nece_cond_qcssbd2_2 ] ) ] in fig .[ fig_dm ] , versus is also plotted . by identifying satisfying [ shown in ( [ ch5_dm_equ ] ) ] , we arrive at the following theorem .[ tighter_m_rmk ] the glb in ( [ zl - corollary - new - weight - vector - equ ] ) which arises from the weight vector in ( [ sine_shape_weight_vector ] ) reduces to ,\ ] ] for sufficiently large , where is given in ( [ ch4_q_largen ] ) . such an asymptotic lower bound is tighter than the welch bound for _ all _ if and only if let us consider the weight vector obtained by minimizing the following function using the lagrange multiplier . where for and .the idea is to optimize the weaker glb in ( [ simplified_glb ] ) . 
by relating the quadratic minimization solution of to the chebyshev polynomials of the second kind ,one can obtain the weight vector , lemma 2 ] , such a weight vector is more generic as it applies to qcss with different .] below .let and .also , let be an even positive integer with .for , define the following weight vector setting , one can minimize in ( [ mini_f ] ) over different and get a generalized version of the levenshtein bound in [ [ levenshtein99 ] , _ corollary 4 _ ] as follows .[ coro_bd1 ] as , the lower bound in ( [ lev_bd_cor4 ] ) is tighter than the welch bound in ( [ welch_bound_for_cc ] ) if and only if or equivalently , where the right - hand side of ( [ equ_coro2_tighter ] ) is obtained from ( [ lev_bd_cor4 ] ) .recall that as , a necessary condition ( cf ._ remark [ rmk_nece_cond ] _ ) for the glb to be tighter than the corresponding welch bound is clearly , which is smaller than the right - hand side of ( [ equ_coro2_tighter2 ] ) .it can be asserted that the resultant glb obtained from the weight vector in ( [ levwtvec_equ ] ) with is tighter if and only if the value of satisfies the condition > 0.\ ] ] this is because when condition ( [ ch5_dm_equ2 ] ) is satisfied , is not only a necessary condition [ cf .( [ nece_cond_qcssbd2_2 ] ) ] but also a sufficient condition [ cf .( [ equ_coro2_tighter2 ] ) ] for the glb to be asymptotically tighter than the welch bound . in fig .[ fig_dm ] , versus is plotted . 
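the chebyshev polynomials of the second kind mentioned here satisfy the three - term recurrence u_0 ( x ) = 1 , u_1 ( x ) = 2x , u_{n+1} ( x ) = 2x u_n ( x ) - u_{n-1} ( x ) , together with the identity u_n ( cos t ) = sin ( ( n + 1 ) t ) / sin ( t ) , which is what makes them natural for this kind of quadratic minimization . a quick check of these generic properties ( not the paper's specific weight vector ) :

```python
import math

def cheb_u(n, x):
    # second-kind chebyshev polynomial via the three-term recurrence
    u_prev, u = 1.0, 2.0 * x
    if n == 0:
        return u_prev
    for _ in range(n - 1):
        u_prev, u = u, 2.0 * x * u - u_prev
    return u

theta, n = 0.7, 5
lhs = cheb_u(n, math.cos(theta))
rhs = math.sin((n + 1) * theta) / math.sin(theta)
assert abs(lhs - rhs) < 1e-12   # u_n(cos t) = sin((n+1)t) / sin(t)
```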
by identifying satisfying [shown in ( [ ch5_dm_equ2 ] ) ] , we have the following theorem .[ tighter_m_rmk2 ] the glb in ( [ lev_bd_cor4 ] ) which arises from the weight vector in ( [ levwtvec_equ ] ) is asymptotically tighter than the welch bound for _ all _ if and only if denote by the optimized asymptotic lower bounds in ( [ aym_lwrbd_wec3_equ ] ) , ( [ aym_lwrbd_wec2_equ ] ) , ( [ lev_bd_cor4 ] ) , respectively .we remark that ( 1 ) , both and are greater than for any ; ( 2 ) , except for .the proof is omitted as it can be easily obtained from the tightness analysis in section iii - b and section iv . to further visualize their relative strengths of these three lower bounds ,we calculate in table i the ratio values of with , where and denotes the corresponding welch bound . a ratio value which is larger than 1 corresponds to a tighter glb ( over the welch bound ) . with tablei , one may verify the three sets of for tighter glb in _ theorems 1 - 3 _ as well as the above - mentioned remark in this subsection .in particular , we can see that for all , showing that weight vector 1 is superior than the other two as it is capable of tightening the glb for all possible , asymptotically .[ table_of_small_value ] [ cols="^,^,^,^,^,^,^,^,^,^,^,^,^,^",options="header " , ]the generalized levenshtein bound ( glb ) in [ [ liuguanmow14 ] , _ theorem 1 _ ] is an aperiodic correlation lower bound for quasi - complementary sequence sets ( qcsss ) with _ number of channels _ not less than 2 ( i.e. , ) .although glb was shown to be tighter than the corresponding welch bound [ i.e. , ( [ welch_bound_for_cc ] ) ] for certain cases , there exists an ambiguous zone [ shown in ( [ glbgap _ ] ) and ( [ glbgap ] ) ] in which the tightness of glb over welch bound is unknown . 
motivated by this, we aim at finding a properly selected weight vector in the bounding equation for a tighter glb for _ all _ ( other than _ some _ ) , where denotes the set size , and is a value depending on and ( the sequence length ) . as the glb is in general a non - convex fractional quadratic function of the weight vector ,the derivation of an analytical solution for a tighter glb for _ all _ possible cases is a challenging task .the most significant finding of this paper is weight vector 1 in ( [ wgtvec3_equ ] ) which is obtained from a frequency - domain optimization approach .we have shown that its resultant glb in ( [ coro_glb_from_wetvec3-equ ] ) is tighter than welch bound for _ all _ and for _ all _ , asymptotically .this finding is interesting as it explicitly shows that the glb tighter condition given in [ [ liuguanmow14 ] , _ theorem 2 _ ] is not only necessary but also sufficient , asymptotically , as shown in _ theorem [ th4optwtvec]_. interestingly , we have proved in section iii - c that weight vector 1 in ( [ wgtvec3_equ ] ) is local minimizer of the glb under certain asymptotic conditions .we have shown that both weight vectors 2 and 3 [ given in ( [ sine_shape_weight_vector ] ) and ( [ levwtvec_equ ] ) , respectively ] lead to tighter glbs for _ all _ but only for certain small values of not less than 2 .note that although they were proposed in , the focus of was on the tightening of levenshtein bound for traditional single - channel ( i.e. , ) sequence sets , whereas in this paper we have extended their tightening capability to glb for multi - channel ( i.e. , ) qcss .furthermore , we have shown in _ theorem [ tighter_m_rmk ] _ and _ theorem [ tighter_m_rmk2 ] _ that weight vector 2 is superior as its admissible set of [ see ( [ distri_optimal_m ] ) ] is larger and subsumes that of weight vector 3 .chen , j .- f .yeh , and n. 
suehiro , a multicarrier cdma architecture based on orthogonal complementary codes for new generations of wideband wireless communications , " _ ieee commun .126 - 135 , oct . 2001 .z. liu , u. parampalli , y. l. guan , and s. bozta , `` constructions of optimal and near - optimal quasi - complementary sequence sets from singer difference sets , '' _ ieee wireless commun .letters , _ vol .487 - 490 , oct .2013 . h. h. chen , s. w. chu , and m. guizani , on next generation cdma techonogies : the real approach for perfect orthogonal code generation , " _ ieee trans .5 , pp . 2822 - 2833 , sep . 2008 .j. li , a. huang , m. guizani , and h. h. chen , inter - group complementary codes for interference - resistant cdma wireless communications , " _ ieee trans .wireless commun ._ , vol . 7 , no .166 - 174 , jan . 2008 .s. bozta , new lower bounds on aperiodic cross - correlation of codes over roots of unity , " research report 13 , department of mathematics , royal melbourne institudte of technology , australia , 1998 .[ liuparaguanbozas14 ] z. liu , u. parampalli , y. l. guan and s. bozta , a new weight vector for a tighter levenshtein bound on aperiodic correlation , " _ ieee trans .theory , _ vol .2 , pp . 1356 - 1366 , feb . 2014 .j. c. preisig , copositivity and the minimization of quadratic functions with nonnegativity and quadratic equality constraints , " _siam j. control and optimization _ , vol .1135 - 1150 , 1996 . [ nyy2014 ] n. y. yu , a fourier transform approach for improving the levenshteing lower bound on aperiodic correlation of binary sequences , " _ advances in mathematics of communications _ , vol209 - 222 , 2014 . | a quasi - complementary sequence set ( qcss ) refers to a set of two - dimensional matrices with low non - trivial aperiodic auto- and cross- correlation sums . for multicarrier code - division multiple - access applications , the availability of large qcsss with low correlation sums is desirable . 
the generalized levenshtein bound ( glb ) is a lower bound on the maximum aperiodic correlation sum of qcsss . the bounding expression of glb is a fractional quadratic function of a weight vector and is expressed in terms of three additional parameters associated with qcss : the set size , the number of channels , and the sequence length . it is known that a tighter glb ( compared to the welch bound ) is possible only if the condition and , where is a certain function of and , is satisfied . a challenging research problem is to determine if there exists a weight vector which gives rise to a tighter glb for _ all _ ( not just _ some _ ) and , especially for large , i.e. , the condition is asymptotically both necessary and sufficient . to achieve this , we _ analytically _ optimize the glb which is ( in general ) non - convex as the numerator term is an indefinite quadratic function of the weight vector . our key idea is to apply the frequency domain decomposition of the circulant matrix ( in the numerator term ) to convert the non - convex problem into a convex one . following this optimization approach , we derive a new weight vector meeting the aforementioned objective and prove that it is a local minimizer of the glb under certain conditions . fractional quadratic programming , convex optimization , welch bound , levenshtein bound , perfect complementary sequence set ( pcss ) , quasi - complementary sequence set ( qcss ) , golay complementary pair . |
given a very large dataset of time series ( presumably with some underlying , non - stationary time dependence between each other ) up to time , predict whether a time series will have an anomaly ( defined formally in section [ sec : detecting ] ) at time ; moreover , when the data stream arrives at time , detect anomalies conclusively using a low - latency model . [ sec : introduction ] the underlying motivation of this piece is the usefulness of anomaly prediction in mission - critical components . a fast anomaly detection platform can by itself be extremely useful for ensuring the reliability of a system ; as a result , this problem has been studied extensively by academics and industry experts . we defer to the authors in for a literature survey of anomaly detection techniques . in this work , we extend the scope of our approach and goal to tackle anomaly prediction . most work in the literature has narrowed its focus to anomaly detection because it is in and of itself a very challenging problem . however , even a slight temporal advantage in prediction can have huge impacts on the performance of a system . the contribution of this work can be categorized in two components : * a highly accurate , low - latency anomaly detection system ( described in detail in section [ sec : model ] ) where we seek to improve on some of the concepts and techniques introduced in . * a novel approach to anomaly prediction : a bayesian network whose structure is based on the coefficients of the lag regressors in the anomaly detection system , coupled with bayesian parameter learning to model the conditional dependency structure between the time series . our ( non - public ) data set consists of time series , sampled every minute for the past year , with some underlying non - stationary dependence between each other and human - labeled anomaly events by domain experts ( i.e.
, the owner of a time series flagged the time series at time point as an anomaly ) . although the dataset has some time series source information , our anomaly detection model makes no assumptions about the interdependence of the time series , which allows us to solve the more general problem of having no a priori knowledge . to illustrate the difficulty of the problem we are trying to solve , in figure [ fig : sample ] we show a subsample of 300 points of one of the time series , with labeled anomalies denoted by red dots . clearly the problem involves latent relationships between time series ; observing one time series in isolation is not enough , even for a human , to determine whether a time point is anomalous . another challenge that we face is that the time series have vastly different probability distributions . to show this , we normalize the time series and approximate the probability distribution of each time series using kernel density estimation ( with a gaussian kernel and a hyperparameter search on the bandwidth ) . figure [ fig : kde ] shows the estimated probability densities of the 19 most important time series . a foundational tenet of our model is that our time series have a latent dependence structure between each other . to validate this assumption , for each pair of time series and ( from a selected set of 19 ) , we use kernel density estimation to approximate the marginal distributions of the time series , and , as well as the joint probability distribution . then we compute the mutual information between the approximate distributions : note that mutual information is a measure of the inherent dependence expressed in the joint distribution of and relative to the joint distribution of and under the assumption of independence ; in a concrete sense , if and only if x and y are independent random variables . figure [ fig : mi ] shows a grid plot of the mutual information between the 19 selected time series ; the coordinate of the figure denotes .
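the pairwise mutual - information computation described above can be sketched in a few lines . the snippet below is our own illustration : it replaces the full kde with a coarse 2 - d histogram ( a plug - in density estimate ) , which is enough to show that a dependent pair scores far higher than an independent one .

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in estimate of I(X;Y) in nats from a 2-D histogram.

    A coarse stand-in for the KDE-based estimate used in the text:
    normalized bin counts play the role of the joint density p(x, y).
    """
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x, shape (bins, 1)
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y, shape (1, bins)
    mask = pxy > 0                        # avoid log(0) on empty bins
    return float((pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])).sum())

rng = np.random.default_rng(0)
x = rng.normal(size=20000)
noise = rng.normal(size=20000)
mi_dep = mutual_information(x, x + 0.1 * noise)   # strongly dependent pair
mi_indep = mutual_information(x, noise)           # independent pair
```

as expected , `mi_dep` is large while `mi_indep` is close to zero ( up to the small positive bias of the plug - in estimator ) , mirroring the non - uniform dependence structure seen in the grid plot .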
as can be seen from the figure below , the time series are highly dependent on each other ; moreover , the dependence structure is not uniform across pairs . although we could use a model like this to try to determine the latent dependence structure , this method is not scalable , as approximating joint probability distributions can be computationally intensive . in section [ sec : model ] we propose a method that will capture latent dependence structures in a computationally feasible way . combining previous techniques for anomaly detection and time series modeling , we propose an auto - regressive distributed lag model ( a model in which the effect of a regressor occurs over time rather than all at once ) with regularization , that is horizontally scalable . we first define the following : * : number of time series * : value of time series at timestamp * : the window size used in the auto regression * : the length of the time series our model is where is the prediction of model at time , is the least squares coefficient vector , and is the model 's error on time series at time . in other words , we use all past information from all time series where , for the prediction of the current value in each time series . now consider concatenating all the predictions for time series ( there are predictions ) . for each time series , we can write the regression predictions as varies in the matrix form : where are the vector of predictions and error term , and . x is the matrix of concatenated lag regressors . the model is trained using the ordinary least squares method with -regularization .
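a minimal numpy sketch of this lag regression is given below . the lag - matrix layout , the synthetic data , and the use of a simple iterative soft - thresholding ( ista ) solver are our own illustrative choices , not details from the paper ; a production system would use a dedicated l1 solver .

```python
import numpy as np

def lag_matrix(series, w):
    """Row t holds the lag-1..lag-w values of every series: the concatenated
    lag regressors forming one row of the design matrix X."""
    n, T = series.shape
    return np.array([[series[i, t - k] for i in range(n) for k in range(1, w + 1)]
                     for t in range(w, T)])

def lasso_ista(X, y, lam, n_iter=2000):
    """Minimize ||y - X b||_2^2 + lam * ||b||_1 by iterative soft-thresholding."""
    step = 1.0 / (2.0 * np.linalg.norm(X, ord=2) ** 2)  # 1 / Lipschitz constant
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        g = b - step * 2.0 * X.T @ (X @ b - y)                    # gradient step
        b = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrinkage step
    return b

# synthetic check: the target follows a single lagged regressor of series 0,
# so the fitted coefficient vector should be sparse with one entry near 1.5
rng = np.random.default_rng(1)
n, w, T = 3, 2, 400
series = rng.normal(size=(n, T))
target = np.array([1.5 * series[0, t - 1] for t in range(w, T)])
target += 0.01 * rng.normal(size=T - w)
beta = lasso_ista(lag_matrix(series, w), target, lam=0.5)
```

the recovered `beta` keeps only the regressor with predictive value , which is exactly the sparsity property the text relies on to keep the amount of data fetched per prediction small .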
in other words , for each time series , we find the parameter vector of length that minimizes \[ \begin{split } \mathcal{l}\left(\beta^{(i)}\right ) & = \sum_{t}\left [ p_t^{(i ) } - x_t \beta^{(i)}\right]^2 + \lambda \|\beta^{(i)}\|_1 \\ & = \|p^{(i ) } - x \beta^{(i)}\|_{2}^{2 } + \lambda \|\beta^{(i)}\|_1 \end{split } \label{eq : loss}\] here , is the error function and is the hyper - parameter which controls the severity of the penalty on complex models . we do this for each of the time series , so we will end up applying ordinary least squares times . this model offers a sparse regression that selects only the time series that have predictive value and identifies the temporal correlation between these time series . the bias and size of the resulting model can be tuned by modifying the regularization parameter . this is crucial since the smaller the model is , the fewer coefficients it has , and thus the less data it needs to make predictions . since retrieving data over a network incurs latency , and low latency when detecting anomalies is a priority in this algorithm , the size of the data to be retrieved should be as small as possible without greatly affecting the model 's accuracy . finally , we store the estimated standard error corresponding to each of the time series , to be used later for anomaly detection : here , is the number of non - zero coefficients . this estimator is motivated by the unbiased estimator for the error variance in a simple linear regression without regularization . its performance is analyzed in reid , tibshirani , and friedman . the model described in section [ sec : model ] makes real - time predictions of all time series in the system as it streams new data .
in order to detect anomalies , we compare the prediction the model makes at the current time , , with the real value coming from the data stream , by running a t - test with the t - statistic defined in equation ( [ eq : t_one_example ] ) : this test gives us a p - value that we compare against a given threshold p - value - threshold , which is a parameter of the model . if the p - value we get with t - statistic ( [ eq : t_one_example ] ) is smaller than p - value - threshold , then we are in the presence of an anomaly and an alert should be raised . the parameter p - value - threshold is probably the most important parameter in the model , because it directly specifies how sensitive the model is in detecting anomalies . usually its value is very low ; a value of around 1e-5 would raise an alert roughly once every 100,000 minutes under normal conditions . it is important to note that this detection procedure has very low latency , because the model is sparse and only needs to make a few calculations in the linear combination . this is a key feature that puts this model ahead of accurate but higher - latency models like dpca . there is another aspect of real - life systems that has not been covered so far . since in real - life systems random , short spikes can be normal and not anomalies , we use a smoothed anomaly definition as opposed to the naive comparison . concretely , since these spikes may result in false - positive classifications of anomalies , instead of testing for an anomaly with just the current data - point , we test for an anomaly with a sample of data - points . this sample of size for time series is defined as the set of the most recent data - points of the series . our model checks the probability that comes from a normal distribution with mean by running a t - test with the t - statistic defined by here , is the sample standard error . as a baseline to highlight the difficulty of the problem we are trying to solve , we implement and test a model that characterizes the time series as multivariate gaussian .
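the single - point test can be sketched as follows . this is our own illustration : the text's t - test is approximated with a standard normal tail ( via `erfc` ) , which is very close to the t distribution for the long series involved , and the threshold value is the one quoted above .

```python
import math

def anomaly_pvalue(predicted, observed, stderr):
    """Two-sided tail probability of the standardized residual
    (normal approximation of the t-test in the text)."""
    t = (observed - predicted) / stderr
    return math.erfc(abs(t) / math.sqrt(2.0))

def is_anomaly(predicted, observed, stderr, p_threshold=1e-5):
    """Raise an alert when the p-value falls below p-value-threshold."""
    return anomaly_pvalue(predicted, observed, stderr) < p_threshold

print(is_anomaly(0.0, 10.0, 1.0), is_anomaly(0.0, 1.0, 1.0))  # True False
```

a ten - standard - error deviation is flagged while a one - standard - error deviation is not , matching the intent of a very low p - value - threshold .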
more concretely , we build a generative model where time series are generated from an -dimensional multivariate gaussian distribution , and then set a threshold on the probability of a time series deviating from its mean . we can visualize this method in figure [ gaussian ] ( the gaussian detector ) . again highlighting the difficulty of the problem that we are trying to solve , we present an oracle where we use the test set as the training set and calculate the `` test error . '' since we know that the test error should be bounded by the training error , this oracle will allow us to understand the difficulty of our problem definition . the results shown in section iii compare against the error of the oracle formulation above . we compare our model 's performance with our baseline model ( multivariate gaussian ) and with the state - of - the - art dynamic principal components model ( dpca ) . we run the models on a year 's worth of server - log data , aggregated as time series . as noted in section [ sec : data ] , the data are labeled by humans ( with domain knowledge ) as anomaly or not anomaly . we use an auto - regression window size of one week . the per - minute output of each model ( anomaly or not - anomaly class ) is compared against the real label of the data . finally , we calculate the score to evaluate the performance of each model . [ table : accuracy ] table [ table : latency ] shows that our model is faster than the very accurate dpca . it is also faster than the gaussian model , which is explained by the fact that our model is sparse and does not need to load all the data in the auto - regression window , just the data points that are highly correlated . since low latency is a crucial component in real - time applications , we believe our model 's combination of good accuracy and extremely low latency is very powerful . the second part of this piece deals with building a bayesian network to characterize the probability of an anomaly .
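the multivariate gaussian baseline can be sketched as below . thresholding the squared mahalanobis distance is equivalent to thresholding the gaussian density of a point , which is the deviation - from - the - mean test described above ; the threshold value here is an illustrative choice of ours , not a value from the paper .

```python
import numpy as np

def fit_gaussian(X):
    """Fit the mean and covariance of the n-dimensional generative model."""
    return X.mean(axis=0), np.cov(X, rowvar=False)

def mahalanobis_sq(x, mu, cov):
    """Squared Mahalanobis distance of a point from the fitted Gaussian."""
    d = x - mu
    return float(d @ np.linalg.solve(cov, d))

rng = np.random.default_rng(2)
train = rng.normal(size=(5000, 3))   # three jointly Gaussian series
mu, cov = fit_gaussian(train)
threshold = 30.0                     # illustrative choice, not from the paper
normal_point = np.zeros(3)
outlier = np.array([8.0, -8.0, 8.0])
flags = [mahalanobis_sq(p, mu, cov) > threshold for p in (normal_point, outlier)]
```

the extreme point is flagged and the typical point is not ; the baseline 's weakness , as the text notes , is that it ignores the temporal and cross - series structure the lag model exploits .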
recall that for every time series we run a distributed lag regression ( section [ sec : model ] ) where the regressors are lag variables of and every other time series . the model for time series is parametrized by a vector of coefficients on the regressors ( the regularization term ensures that the majority of these weights are set to zero ) . we denote by the vector of non - zero weights for time series , and by the non - zero weight component for time series that corresponds to lag regressors of time series with a lag of ( as in section [ sec : model ] ) . similarly , let correspond to the lag _ variables _ with non - zero weights for time series regressing on lag regressors of time series where the lag for that regressor is . let the length of this vector of variables be . the random variables in our network include : for ; for ; and for , , and . note that although there may be overlap between and for , our bayesian network considers the set of all these variables . the lag variables represent the values of the lagged time series ; their domain is continuous . the variables take values representing the presence of an anomaly at time , and lastly the variables denote how much our prediction deviates from the true value of the time series ( see section 6.4 ) . we can visualize the bayes net in figure [ bayesnet ] . ( figure [ bayesnet ] , a diagram of the network with lag - variable input nodes , intermediate deviation nodes , and an anomaly output node , is omitted here . ) the structure of the bayesian network is defined by adding , for each , an edge from node to node for , , and an edge from . fan and li show that under the approximation : the unconstrained formulation of the convex optimization problem in equation ( [ eq : loss ] ) can be solved by iterating : where is the estimate of . moreover , fan and li draw from the theory of k - step estimators in , which proves that , with a good starting parameter , a one - step iteration can be as efficient as the penalized maximum likelihood estimator when the newton - raphson algorithm is used . we have \left(x \beta^{(i ) } + \epsilon^{(i)}\right) . thus , there is a direct relation between and . as determines the possibility of an anomaly , the derived relation gives rise to the appearance of the nodes in the bayesian network . the generative model presented above proved to have many challenges ; for concreteness , we highlight some of them . our initial attempt was discretizing the continuous random variables and using the maximum likelihood method , an approach which is sometimes used in the literature ( p. 186 ) . note however that discretization provides a trade - off between accuracy of the approximation and cost of the computation ( p. 607 ) . work done using this approach proved to be nonviable . for concreteness , note that the average number of parents ( s ) for each time series is .
even for a coarse discretization scheme of 10 bins , each local cpd table would have entries . even with the decomposability of factors in bayesian networks , the amount of data needed to accurately estimate the maximum likelihood ( not to mention the computational complexity of collecting the sufficient statistics ) would be intractable . even if we could estimate the local maximum likelihood , the complexity of many inference algorithms is exponential in the size of the domain of the conditional probability distributions ( e.g. , variable elimination ) . one possible approach that we are exploring is tuning the hyperparameter of the regularization term to restrict the number of variables in each time series to be used in the bayesian network . another approach we are exploring is using a hybrid model , which contains both continuous and discrete random variables . this structure in and of itself presents many challenges ( see , e.g. , , chapter 14 ) . in particular , we can show that inference on this class of networks is _ np - hard _ , even when the network structure is a polytree ( p. 615 ) .
solving the full bayesian network model exactly is intractable , as discussed above . thus , we solve a simplified version of this problem . specifically , we ignore the prior distribution and use the conditional distribution as the posterior distribution . also , we take the relation between and to be linear . the detailed procedure for characterizing anomalies is provided below . we focus on detecting anomalies for one time series only ; the other time series in the bayesian network are handled in a similar way . * step 1 : we normalize to be in the interval , where is the minimum over all observed values of and is the maximum over those values . the constants and make the normalized fall strictly in . * step 2 : we specify the relation between and the s as a linear gaussian model . more precisely , we determine the relation ; this relation can be rewritten as , which can then be solved by the least - squares method . * step 3 : we compute the probability that given . then is characterized as an anomaly if and only if this probability is greater than , and as normal otherwise . the constant can be chosen using cross - validation so as to maximize the score .
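the three steps above can be rendered concretely as follows . since several symbols are elided in the text , this sketch makes explicit assumptions : an intercept is added to the linear gaussian fit , the step - 3 probability is read as the gaussian probability mass of residuals smaller than the observed one , and the constant c is set to an arbitrary illustrative value rather than a cross - validated one .

```python
import numpy as np
from math import erfc, sqrt

def normalize(y, eps=1e-3):
    """Step 1: map y strictly into the open interval (0, 1)."""
    lo, hi = y.min(), y.max()
    return (y - lo + eps) / (hi - lo + 2.0 * eps)

rng = np.random.default_rng(3)
T = 500
parents = rng.normal(size=(T, 4))    # lag variables with non-zero weights
y = parents @ np.array([0.5, -0.3, 0.2, 0.0]) + 0.05 * rng.normal(size=T)
y_n = normalize(y)

# step 2: linear Gaussian model, fitted by least squares (with an intercept)
A = np.column_stack([np.ones(T), parents])
w, *_ = np.linalg.lstsq(A, y_n, rcond=None)
sigma = (y_n - A @ w).std()

def p_anomaly(z_row, y_obs):
    """Step 3 (our reading): probability mass of residuals smaller than the
    observed one; flag an anomaly when it exceeds a constant c."""
    r = abs(y_obs - (w[0] + z_row @ w[1:]))
    return 1.0 - erfc(r / (sigma * sqrt(2.0)))

c = 0.999                            # illustrative threshold, not cross-validated
```

a point sitting many residual standard deviations from its linear - gaussian prediction gets a probability close to 1 and is flagged , while a well - predicted point is not .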
in this section we use four time series as our data set , in order to characterize anomalies for one of these four time series . the reason we drastically reduce the size of the data is that the matrix ( as defined in equation ( [ eq : loss ] ) ) becomes very large , and limiting the size of our data set was the only way to be able to carry out the computation on a single machine . our data set has the following size : * ( 30 days of minute data ) * ( 5 days of minute data ) * ( four time series ) with this data set we generate a matrix of float elements with dimensions rows by columns . this matrix has a size of roughly 17 gb ( depending on the platform used ) . finally , we obtain , and the corresponding score is . this value is certainly not as good as that of dpca or of our original discriminative anomaly detection approach , but the comparison is nuanced , since above we only processed one time series with a simplified bayesian network ( because of run - time complexity ) . notwithstanding , this result is expected ; discriminative models will typically outperform generative models , in terms of classification error rate , when the relationships expressed by the generative model only approximate the true underlying generative process ( particularly in our case , where we make several approximations to reduce the model complexity ) . however , the advantage of our novel approach is that our model is richer .
as a generative model , we can make explicit claims about the process that underlies the time series . future work may include running inference queries on the generative model ( after tuning it for better performance ) to better understand the underlying process the data is coming from . equation ( [ eq : loss ] ) proposes a loss function composed of an error and an regularization . we think it is worth testing whether an error and an regularization perform better . the reason behind this is that an error is more robust to outliers . if our regression is less affected by the anomalies in the training data set , then it will predict anomalies more accurately in the testing data . also , the model proposed in section [ sec : model ] may show false positives when the predictors have an anomaly . this is because after an anomaly occurs , it remains in the data that will later be used as an input to our regression . we did not see this in our experiment , but it is something we would like to test and evaluate . | we develop a supervised machine learning model that detects anomalies in systems in real time . our model processes unbounded streams of data into time series which then form the basis of a low - latency anomaly detection model . moreover , we extend our preliminary goal of just anomaly detection to simultaneous anomaly prediction . we approach this very challenging problem by developing a bayesian network framework that captures the information about the parameters of the lagged regressors calibrated in the first part of our approach and uses this structure to learn local conditional probability distributions .
quantum states formally represent the addressable information content of the system they describe . during their evolution , quantum systems may suffer the presence of noise , for instance due to the interaction with another system , generally referred to as an external environment . this may cause a loss of information on the system , and leads to a modification from its initial to its final state . in quantum communication theory , stochastic channels , that is , completely positive trace preserving ( cpt ) mappings , provide a formal description of the noise affecting the system during its evolution . the most detrimental form of noise from the point of view of quantum information is described by the so - called entanglement breaking ( eb ) maps . these maps , when acting on a given system , destroy any entanglement that was initially present between the system itself and an arbitrary external ancilla . accordingly , they can be simulated as a two - stage process where a first party makes a measurement on the input state and sends the outcome , via a classical channel , to a second party who then re - prepares the system of interest in a previously agreed state . for continuous variable quantum systems , like optical or mechanical modes , there is a particular class of cpt maps which is extremely important : the class of gaussian channels . almost every realistic transmission line ( _ e.g. _ optical fibers , free space communication , _ etc . _ ) can be described as a gaussian channel .
in this context the notion of eb channels has been introduced and characterized in ref . gaussian channels , even if they are not entanglement breaking , usually degrade quantum coherence and tend to decrease the initial entanglement of the state . one may try to apply error correction procedures based on gaussian encoding and decoding operations acting respectively on the input and output states of the map , plus possibly some ancillary systems . this however has been shown to be useless , in the sense that gaussian procedures can not augment the entanglement transmitted through the channel ( no - go theorem for gaussian quantum error correction ) . here we point out that such lack of effectiveness does not apply when we allow gaussian recovering operations to act _ between _ two successive applications of the same map on the system . specifically , our approach is based on the notion of _ amendable channels _ introduced in , whose definition derives from the generalization of the class of eb maps ( gaussian and not ) to the class of eb channels of order . the latter are maps which , even if not necessarily eb , become eb after iterative applications on the system ( in other words , denoting by `` '' the composition of super - operators , is said to be eb of order if is eb while is not ) . we therefore say that a map is amendable if it is eb of order 2 and there exists a second channel ( called the _ filtering _ map ) which , when interposed between the two actions of the initial map , prevents the global one from being eb . in this context we show that there exist gaussian eb channels of order which are amendable through the action of a proper gaussian unitary filter ( i.e. , whose detrimental action can be stopped by performing an intermediate , recovering gaussian transformation ) . the paper is structured as follows . in section i we focus on the formalism of gaussian channels , the characterization of eb gaussian channels , and their main properties .
in section ii we explicitly define two types of channels which are amendable via a squeezing operation and a phase shifter , respectively . for each channel we also propose a simple experiment based on finite quantum resources and feasible within current technology . let us briefly set some standard notation . a state of a bosonic system with degrees of freedom is gaussian if its characteristic function is gaussian , where is the symplectic form , and , are the canonical observables for the bosonic system . is the vector of the expectation values of , and is the covariance matrix \[ \left [ \mathbf{v}_{\rho } \right]_{ij } = \frac{\langle r_i r_j + r_j r_i \rangle_\rho}{2 } - \langle r_i \rangle_\rho \langle r_j \rangle_\rho \ , . \] a cpt map is called gaussian if it preserves the gaussian character of the states , and can be conveniently described by the triplet , and , being matrices , which fulfill the condition \[ \beta \geq i \left [ \omega - k^\top \omega k \right ] / 2 \ , , \] and act on and as \[ \begin{aligned } \mathbf{v}_{\rho } & \to \mathbf{v}_{\phi[\rho ] } = k^\top \mathbf{v}_{\rho } k + \beta \ , , \\ { \langle\vec{r}\rangle}_{\rho } & \to { \langle\vec{r}\rangle}_{\phi[\rho ] } = k^\top { \langle\vec{r}\rangle}_\rho + l . \end{aligned } \] a special subset of gaussian channels is constituted by the unitary gaussian transformations , characterized by having : they include multi - mode squeezing , phase shifts , displacement transformations , and products among them . the composition of two gaussian maps , , described by and respectively , is still a gaussian map whose parameters are given by . finally , a gaussian map is entanglement - breaking if and only if its matrix can be expressed as with . one - mode attenuation channels are special examples of gaussian mappings such that : where , and . this transformation can be described in terms of a coupling between the system and a thermal bosonic bath with mean photon number , mediated by a beam splitter of transmissivity . in ref . the eb properties of the maps under channel iteration were studied as a function of the parameters and . for completeness we report these findings in fig . [ fig : att ] .
in the plot the solid lines represent the lower boundaries between the regions which identify the sets of transformations which are eb of order . they are analytically identified by the inequalities or , in terms of the parameter which gauges the bath average photon number , by . notice that for , for all finite ; that is , if the system is coupled with the vacuum ( zero photons ) , the reiterated application of the map , represented by the action of a beam - splitter on the input signal , does not destroy the entanglement between the system and any other ancilla with which it is maximally entangled before the action of the map . it is a well - known fact that a map is eb if and only if , when applied to one side of a maximally entangled state , it produces a separable state . this fact gives an operationally well - defined experimental procedure for characterizing the eb property of a channel , based on the ability to prepare a maximally entangled state to be used as a probe state for the map . unfortunately , while feasible for finite - dimensional systems , in a continuous - variable setting this approach is clearly problematic due to the physical impossibility of preparing such an ideal probe state , since it would require an infinite amount of energy . quite surprisingly , the following property allows us to circumvent this experimental issue . * property ( equivalent test states ) . * _ given an orthonormal set , let be an un - normalized maximally entangled state and a full - rank density matrix . then the ( normalized ) state is a valid resource equivalent to , in the sense that a channel is eb if and only if is separable . _ _ proof . _ we already know that is eb if and only if is separable . we need to show that is separable if and only if is separable .
this must be true because the two states differ only by a local cp map , which can not produce entanglement , namely : and . the same property can be extended to continuous variable systems , where is not normalizable but can still be consistently interpreted as a distribution . now , let us consider a two - mode squeezed vacuum ( tmsv ) state with finite squeezing parameter , i.e. , where is now the fock basis . it can be expressed in the form of eq . ( [ property1 ] ) by choosing , so that is a valid resource for the eb test . the previous property implies that _ it is sufficient to test the action of a channel on a two - mode squeezed state with arbitrary finite entanglement in order to verify if the channel is eb or not . _ surprisingly , even a tiny amount of entanglement is in principle enough for the test . however , because of experimental detection noise and imperfections , a larger value of may be preferable , as it allows for a clean - cut discrimination . the previous results are obviously extremely important from an experimental point of view since , for single - mode gaussian channels , one can apply the following operational procedure : * prepare a realistic two - mode squeezed vacuum state with a finite value of , * apply the channel to one mode of the entangled state , resulting in , * check if the state is entangled or not . probably the experimentally most direct way of witnessing the entanglement of is to apply the so - called product criterion . in this case , entanglement is detected whenever ; we indicate with and , , the position and momentum quadratures associated to each mode of the twin beam . if inequality ( [ crit ] ) is satisfied , is entangled and so is not eb . this test is a witness , but it does not provide a conclusive separability proof .
for this reason it is useful to compare it with a necessary and sufficient criterion . we will use the logarithmic negativity , which is an entanglement measure quantifying the violation of the ppt separability criterion . let be the covariance matrix of , written in the block form . the entanglement negativity is a function of the four invariants under local symplectic transformations , \det [ \mathbf{a } ] , \det [ \mathbf{b } ] , \det [ \mathbf{c } ] , \det [ \mathbf{v}_{\rho } ] . notice that is the minimum symplectic eigenvalue of the partially transposed state , and can be interpreted as an _ optimal product criterion _ , since we have that is entangled if and only if ; eq . ( [ crit ] ) , instead , is only a sufficient condition . both tests , eq . ( [ crit ] ) and eq . ( [ eq : nuquadromis ] ) , will be used for assessing , in the next section , the entanglement - breaking property of two possible realizations of amendable gaussian channels . we note that direct simultaneous measurements in a dual - homodyne set - up on the entangled sub - systems allow a direct evaluation of the product criterion , while the experimental evaluation of requires the reconstruction of the bipartite system covariance matrix , which in many cases can be obtained with a single homodyne . in this section we aim to prove the existence of amendable gaussian maps by constructing explicit examples , and we propose experimental setups that would allow one to implement and test them . to do so we will look for gaussian single - mode maps and , where is unitary , such that ( notice that the second condition requires that can not be eb ) . under these assumptions , it follows that the channel is an eb map of order 2 which can be amended by the unitary filter .
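the covariance - matrix criteria above can be checked numerically . the sketch below is our own illustration : it assumes the vacuum - variance - 1/2 convention , trial attenuation parameters of our choosing ( transmissivity 0.8 , bath photon number 2 ) , and a finite twin - beam probe , which by the equivalent - test - states property is enough to witness the eb character of the channel .

```python
import numpy as np

VAC = 0.5  # vacuum quadrature variance in the convention V = <rr + rr>/2 - <r><r>

def tmsv_cov(r):
    """Covariance matrix of a two-mode squeezed vacuum (twin-beam) state."""
    a, s = np.cosh(2 * r) / 2, np.sinh(2 * r) / 2
    cz = np.diag([1.0, -1.0])
    return np.block([[a * np.eye(2), s * cz], [s * cz, a * np.eye(2)]])

def attenuate_mode2(V, eta, nbar):
    """Attenuation map (K = sqrt(eta) 1, beta = (1 - eta)(nbar + 1/2) 1)
    applied to the second mode only."""
    K = np.diag([1.0, 1.0, np.sqrt(eta), np.sqrt(eta)])
    add = (1 - eta) * (nbar + VAC)
    return K.T @ V @ K + np.diag([0.0, 0.0, add, add])

def nu_tilde(V):
    """Smallest symplectic eigenvalue of the partial transpose, from the four
    local-symplectic invariants; the state is separable iff nu >= 1/2."""
    A, B, C = V[:2, :2], V[2:, 2:], V[:2, 2:]
    delta = np.linalg.det(A) + np.linalg.det(B) - 2 * np.linalg.det(C)
    return np.sqrt((delta - np.sqrt(delta ** 2 - 4 * np.linalg.det(V))) / 2)

def log_negativity(V):
    return max(0.0, -np.log(2 * nu_tilde(V)))

# sanity check: a pure twin-beam has nu = exp(-2r)/2, hence E_N = 2r
En_pure = log_negativity(tmsv_cov(0.5))

# trial parameters (our choice): on a finite r = 1 probe, one pass of the
# thermal attenuation keeps entanglement while two passes destroy it
V1 = attenuate_mode2(tmsv_cov(1.0), 0.8, 2.0)
V2 = attenuate_mode2(V1, 0.8, 2.0)
```

for these trial values the single channel leaves the probe entangled ( nu < 1/2 ) while the iterated channel renders it separable , which is exactly the eb - of - order - 2 behaviour discussed in the previous section .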
indeed , exploiting the fact that local unitary transformations can not alter the entanglement , the above expressions imply : even though ( [ eq : phiuphieb ] ) , ( [ eq : phiuphieb1 ] ) and ( [ eq : phiuphiu ] ) , ( [ argum ] ) are formally equivalent , it turns out that the former relations are easier to implement experimentally . for this reason , in the following we will focus on this scenario . [ example1 ] here we provide our first example of a channel and of a unitary transformation fulfilling eqs . ( [ eq : phiuphieb ] ) and ( [ eq : phiuphieb1 ] ) . we will consider two - mode gaussian maps . by exploiting the property explained in sec . [ sec : choi ] regarding the equivalence of test states , without loss of generality we will apply our channels to twin - beam states with finite squeezing parameter , that is , with finite energy , rather than to maximally entangled states , which would require an infinite amount of energy to be realized . eqs . ( [ eq : phiuphieb ] ) and ( [ eq : phiuphieb1 ] ) will be implemented by the two setups of fig . [ fig : simplifiedsetups ] : * the first one ( setup ) is used to realize the transformation . it consists of an optical squeezer , implementing the unitary , coupled on both sides with a beam - splitter ( one for each side ) of transmissivity . * the second setup ( setup of fig .
* the second setup ( setup of fig . [ fig : simplifiedsetups ] ) instead is used to realize the transformation : it is obtained from the first by removing the squeezer between the beam splitters . as anticipated , we will use states as entangled probes . the aim of this section is to show that by properly choosing the system parameters , the squeezing and the beam - splitter transmissivities , it is possible to realize an amendable gaussian channel . the transformation induced by the beam splitter can be described by an attenuation map with , . on the other hand , we indicate as the unitary map depending on the real parameter , referring to the action of an optical squeezer . we set the initial state of the two modes to be a twin - beam , with covariance matrix given by the states at the output of our two setups are described by the following 2-mode density matrices , ] with we stress that and act only on one of the two modes of the incoming twin - beam . both setups are divided into three stages : a state is prepared , the desired sequence of channels is applied to one mode of the entangled probe , and finally the output state is measured . the beam - splitters implement the attenuation channels of eqs . ( [ conc2 ] ) , ( [ conc1 ] ) , which represent the transformations of eqs . ( [ eq : phiuphieb ] ) , ( [ eq : phiuphieb1 ] ) , while the squeezing transformation implements the unitary . ]
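the covariance - matrix bookkeeping for the two setups is compact enough to check numerically . the sketch below uses the hbar = 2 convention ( vacuum cm = identity ) , models each beam splitter as the attenuation channel x = sqrt ( eta ) i , y = ( 1 - eta ) i , and the squeezer as s = diag ( e^{r'} , e^{-r'} ) ; the parameter values are illustrative and not the paper's thresholds , but they reproduce the qualitative claim : without the squeezer the probe stays entangled , while with a sufficiently strong squeezer the concatenation destroys the entanglement of the probe .

```python
import math

def min_pt_eig_diag(A, B, C):
    """Smallest PT symplectic eigenvalue for a two-mode CM whose 2x2
    blocks are all diagonal; A, B, C are (xx, pp) diagonal pairs."""
    detA, detB, detC = A[0] * A[1], B[0] * B[1], C[0] * C[1]
    detV = (A[0] * B[0] - C[0]**2) * (A[1] * B[1] - C[1]**2)
    delta_pt = detA + detB - 2.0 * detC
    return math.sqrt((delta_pt - math.sqrt(delta_pt**2 - 4.0 * detV)) / 2.0)

def setup_nu(eta, r_sq, r_probe, with_squeezer):
    """PT eigenvalue of a twin-beam probe after sending one mode through
    attenuator -> (optional squeezer) -> attenuator."""
    c, k = math.cosh(2 * r_probe), math.sinh(2 * r_probe)
    s = math.exp(r_sq) if with_squeezer else 1.0
    # composed channel on the probed mode: X = eta * diag(s, 1/s)
    X = (eta * s, eta / s)
    Y = (eta * (1 - eta) * s * s + (1 - eta),
         eta * (1 - eta) / (s * s) + (1 - eta))
    A = (X[0]**2 * c + Y[0], X[1]**2 * c + Y[1])
    B = (c, c)
    C = (X[0] * k, -X[1] * k)
    return min_pt_eig_diag(A, B, C)

# illustrative sample point: eta = 0.5, squeezer parameter r' = 2
nu_setup_a = setup_nu(0.5, 2.0, 0.5, with_squeezer=True)   # with squeezer
nu_setup_b = setup_nu(0.5, 2.0, 0.5, with_squeezer=False)  # squeezer removed
# nu >= 1 means the output is separable (PPT), nu < 1 means still entangled
```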
to better visualize the eb regions for the two parameters . it follows , then , that for all values of and fulfilling the condition ( [ eq : tildeeta ] ) [ or its equivalent version ( [ eq : rtilda ] ) ] the channel concatenations ( [ conc2 ] ) and ( [ conc1 ] ) provide an instance of the identities ( [ eq : phiuphieb ] ) and ( [ eq : phiuphieb1 ] ) . consequently , following the argument ( [ argum ] ) we can conclude that the map is an example of a gaussian channel that is eb of order 2 , and can be amended by the filtering map : for all s . we conclude this section by introducing an experimental proposal for testing the entanglement - breaking properties of the maps discussed above . a possible procedure is to use in both setups the product criterion given in eq . ( [ crit ] ) in order to test the entanglement of the twin - beam after applying and [ i.e. , the entanglement of the states and ] . otherwise , if we are able to measure the full covariance matrix of the state , we can apply the optimal criterion of eq . ( [ eq : nuquadromis ] ) . we will take into account both criteria , since the first one could be experimentally simpler while the second one provides a conclusive answer .
in our case , the covariance matrix for is given by where it follows that and in ( [ eq : wgauss ] ) are given by and for what concerns the computation of we get &=&-\frac{1}{4 } \left(2 \gamma(\eta , r , r^{\prime})^{2}-\alpha(\eta , r , r^{\prime } ) \cosh ( r)\right ) \nonumber \\ & & \times \left(\alpha(\eta ,- r , r^{\prime } ) \cosh ( r)-2 \gamma(\eta ,- r , r^{\prime})^{2}\right)\ , .\nonumber\end{aligned}\ ] ] as already observed , the state which describes the system at the output of the second configuration can be obtained from by simply setting : therefore , in this same limit the above equations can also be used to determine the corresponding values for the state . the results for both channels are presented in fig . [ fig : r0andr1 ] , which shows the values of and as functions of the beam splitter transmissivity . the comparison with the entanglement measure is useful to determine the values of and for which the product criterion provides a reliable entanglement test . in the second setup [ ] we expect the state of the twin - beam to be entangled , since for all s . on the one hand , as expected we have that is always lower than , the bound being saturated when or ( see fig . [ subfig : r0 ] ) . on the other hand , for we get , and thus we cannot distinguish from a separable state if the product criterion is used . we conclude that the product criterion , directly accessible by a dual homodyne set - up , is reliable for . on the contrary , the ppt criterion , requiring the full experimental reconstruction of the state covariance matrix , can be used all the way down to , as shown in fig . [ subfig : r0 ] . if we switch on the optical squeezer [ for , see eq . ( [ eq : rtilda ] ) ] , we will get , and we expect the same for , as . equivalently , for any fixed , from eq . ( [ eq : tildeeta ] ) we know that for , as also shown by the behavior of in fig . [ subfig : r1 ] , where we have set .
on the contrary , is always greater than , and thus our test based on is not conclusive for . this comes from the fact that the product criterion , while being directly accessible by measurements , gives a sufficient but not necessary condition for entanglement . summarizing , if we fix the squeezing parameter , in order to get a reliable test by measuring for both setups , the transmissivity of the beamsplitter should be fixed such that under these conditions the witness measurement we have selected allows us to verify that is entangled [ meaning that is not eb ] . at the same time , the state will not pass the entanglement witness criterion , in agreement with the fact that is eb . of course this last result cannot be used as an experimental _ proof _ that is eb since , to do so , we should first check that no other entanglement witness bound is violated by . notice that this drawback can be avoided if we are able to compute the optimal witness by measuring the full covariance matrix of the output state . finally , let us stress that in the final relation ( [ eq : reliability ] ) does not depend on the two - mode squeezing of the incoming twin - beam ( see inset of fig . [ fig : r0andr1 ] ( a ) ) , and thus we do not need to test the eb properties of our maps on states characterized by an infinite amount of energy , that is , on maximally - entangled states . this represents an important observation , especially from the point of view of the experimental implementation of our scheme . a more detailed analysis of possible experimental losses and detection errors will be addressed in a future work . in the previous section we have seen a class of eb gaussian channels which are amendable through a squeezing filtering transformation . here we focus on channels which are amendable with a different unitary filter : a phase shift . according to the previous notation , the phase shift can be represented with the triplet : where is a phase space rotation of an angle .
following the analogy with the previous case we look for a channel , such that the concatenation is eb or not eb , depending on the value of .it is easy to check that can not be an attenuation channel because in this case it would simply commute with the filtering operation .a good candidate is instead the channel , given by where , and .notice that this corresponds to an attenuation channel where the noise affects only the quadrature of the mode .this channel does not commute with a phase shift and , as we are going to show , the composition is eb only for some values of the angle . from the composition law in eq .( [ eq : composition ] ) we have that the total map is given by the entanglement breaking condition given in eq .( [ eq : ebtgauss ] ) , is equivalent to as explained in sec .[ sec : choi ] .this implies that where and are solutions of the equation . they can be explicitly determined : and , where the two solutions make sense only in the cases in which .we may identify this as an _amendability condition_. otherwise , in the cases in which there are no admissible solutions , it means that the channel is constantly eb or not eb independently of the filtering operation .if we want to experimentally test the eb property of the channel as a function of the filtering parameter , we should be able to realize the operations and .a phase shift operation applied to an optical mode can be realized by changing the effective optical path .this is a classical passive operation and it is experimentally very simple .the main difficulty is now the realization of the channel . 
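before turning to the optical realization , note that a one - quadrature - noise channel of this kind arises naturally by composing two simple elements . the sketch below uses the standard gaussian composition law , assumed here in the form x = x2 x1 , y = x2 y1 x2^t + y2 ( hbar = 2 , vacuum noise ( 1 - eta ) i for the beam splitter ) , with purely illustrative parameter values .

```python
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(a):
    return [[a[j][i] for j in range(2)] for i in range(2)]

def madd(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def compose(ch2, ch1):
    """Gaussian composition law (assumed form): acting with (X1, Y1) first
    and then (X2, Y2) gives X = X2 X1 and Y = X2 Y1 X2^T + Y2."""
    (X1, Y1), (X2, Y2) = ch1, ch2
    X = matmul(X2, X1)
    Y = madd(matmul(matmul(X2, Y1), transpose(X2)), Y2)
    return X, Y

eta, n = 0.7, 0.4                     # illustrative transmissivity and noise
se = eta ** 0.5
beam_splitter = ([[se, 0], [0, se]], [[1 - eta, 0], [0, 1 - eta]])
phase_noise   = ([[1, 0], [0, 1]], [[0, 0], [0, n]])  # displaces p only

X, Y = compose(phase_noise, beam_splitter)
```

the composed map is an attenuation channel whose added noise is larger only in the p quadrature , which is exactly the structure required above .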
a possible way to realize is to combine a beam splitter with an additive phase noise channel . this is defined by the triplet and it is essentially a random displacement of the quadrature , where the shift is drawn from a gaussian distribution of variance and mean equal to zero . this could be realized via an electro - optical phase modulator driven with classical electronic noise or by other techniques . it is immediate to check that _ i.e. _ a beam splitter followed by classical phase noise is a possible experimental realization of the channel . the proposed experimental setup is sketched in fig . [ exp2 ] . as in fig . [ fig : simplifiedsetups ] , the setup is divided into three stages ( preparation of the probing state , application of the channels , and finally measurement of the output state ) . the global map is obtained by applying twice the gaussian channel with the intermediate insertion of a unitary phase shifter . depending on the value of the phase shift , the global channel is eb or not . ] a two - mode squeezed state is prepared and the desired sequence of channels is applied on one mode of the entangled pair . the presence of entanglement after the application of all the channels is verified by measuring the variances of and defined in ( [ eq : wgauss ] ) after a unitary correction . this correction does not change the entanglement of the state but it is important for optimizing the entanglement criterion ( [ crit ] ) . a possible experiment could be to measure the witness for various choices of the filtering operation , or in other words for various values of . one should check that the condition for entanglement is verified only for some angles , while for we must have , because the channel is eb . as a figure of merit for the quality of the experiment , the witness can be compared with the corresponding optimal witness . and optimal theoretical witness as functions of the angle for the setup of fig . [ exp2 ] with parameters : , and .
in this case we find that the global channel is entanglement breaking only in the region where , and . ] the results are plotted in fig . [ plot2 ] . for some values of , one can experimentally show that the channel is not eb . on the other hand , inside the entanglement breaking region , the witness is consistently larger than . again , we underline that , if we are able to measure the covariance matrix of the output state , the product criterion can be replaced by the optimal one ( see eq . ( [ eq : nuquadromis ] ) ) . as a final remark we stress that , even though it is realistic to consider to account for experimental losses , the same qualitative results are possible in the limit of , _ i.e. _ without the two beam splitters . in this case the amendability condition ( see eq . ( [ c - eta ] ) ) implies , and the global map is eb for . in this paper we proved the existence of amendable gaussian maps by constructing two explicit examples . for each of them we put forward an experimental proposal allowing the implementation of the map . we took as benchmark model the set of entanglement breaking maps , and presented a sort of `` error correction '' technique for gaussian channels . differently from the standard encoding and decoding procedures applied before and after the action of the map , it consists of considering a composite map with and applying a unitary filter between the two actions of the channel , so as to prevent the global map from being entanglement breaking . we focused on two - mode gaussian systems . we recall that in order to test the entanglement breaking properties of a map we have to apply it , tensored with the identity , to a maximally - entangled state , which in a continuous variable setting would require an infinite amount of energy . however , in sec . [ sec : choi ] we have proved that without loss of generality it is sufficient to consider a two - mode squeezed state with finite entanglement .
this property is crucial for the experimental feasibility of our schemes . finally , in order to verify if the entanglement of the input state survives after the action of our gaussian maps , we applied the product criterion to the outgoing modes , and compared it with the entanglement negativity . the latter analysis enabled us to properly set the intervals to which the experimental parameters have to belong in order to consider the product criterion reliable . this analysis paves the way to a broad range of future perspectives . one possibility would be to extend it to the case of multimode gaussian or non - gaussian maps . another compelling issue would be determining a complete characterization of amendable gaussian maps of second or higher order . we recall that , according to the definition introduced in , a map is amendable of order , if and it is possible to delay its detrimental effect by steps by applying the same intermediate unitary filter after successive applications of the channel . one possible outlook in this direction would be to allow the choice of different filters at each error correction step and determine an optimization procedure over the filtering maps . of course this analysis would be extremely difficult to perform for arbitrary noisy maps . a first step would be to focus on the set of gaussian maps , using the conservation of the gaussian character under combinations among them and their very simple composition rules to perform this analysis . s. l. braunstein and p. van loock , rev . mod . phys . * 77 * , 513 ( 2005 ) ; c. weedbrook , s. pirandola , r. garcia - patron , n. j. cerf , t. c. ralph , j. h. shapiro and s. lloyd , rev . mod . phys . * 84 * , 621 ( 2012 ) ; a. s. holevo and r. f. werner , phys . rev . a * 63 * , 032312 ( 2001 ) ; a. ferraro , s. olivares and m. g. a. paris , _ gaussian states in continuous variable quantum information _ , ( bibliopolis , napoli ) ( 2005 ) ; j. eisert and m. m.
wolf , _ gaussian quantum channels _ , in _ quantum information with continuous variables of atoms and light _ , p. 23 ( imperial college press , london ) ( 2007 ) ; f. caruso , j. eisert , v. giovannetti and a. s. holevo , phys . rev . a * 84 * , 022306 ( 2011 ) ; w. p. bowen , r. schnabel , p. k. lam , and t. c. ralph , phys . rev . a * 69 * , 012304 ( 2004 ) ; j. laurat , g. keller , j. a. oliveira - huguenin , c. fabre , t. coudreau , a. serafini , g. adesso , and f. illuminati , j. opt . b : quantum semiclass . opt . * 7 * , s577 ( 2005 ) ; | we show that there exist gaussian channels which are _ amendable _ . a channel is amendable if , when applied twice , it is entanglement breaking , while there exists a _ unitary filter _ such that , when interposed between the first and second action of the map , it prevents the global transformation from being entanglement breaking [ phys . rev . a * 86 * , 052302 ( 2012 ) ] . we find that , depending on the structure of the channel , the unitary filter can be a squeezing transformation or a phase shift operation . we also propose two realistic quantum optics experiments where the amendability of gaussian channels can be verified by exploiting the fact that it is sufficient to test the entanglement breaking properties of two mode gaussian channels on input states with finite energy ( which are not maximally entangled ) .
with the experimental set of fig . [ setup ] , analyze the dependence between current and voltage of the `` black box '' , which must not be opened during the olympiad . here follow two qualitative problems , described in section [ quatity_tasks ] , which aim to verify whether the device hidden in the `` black box '' works . section [ black_box_investigation ] describes in detail the experiments that must be carried out with the set to analyse the vac of the `` black box '' . then in section [ the_world_is_not_perfect ] you will make a more detailed analysis and will see that the physical properties found so far for this object are valid only within certain limits . after the static analysis follows section [ dynamics ] , which explores the dynamic characteristics of the `` black box '' by mechanical experiments . there is also the purely theoretical section [ theoreticalproblem ] , connected to the theoretical description of the proposed experiments . we suggest that students who are not very confident in experiment , but cope better with mathematics , concentrate on the theoretical problem . for the tireless there is a homework assignment , described in section [ homework ] . in the set of fig . [ setup ] there is one `` black box '' and one led with long cables and two alligator clips . connect the led to the `` black box '' and if the led does not light , swap the `` alligator clips '' . in at least one of the polarities the led lights . in the further qualitative problems you will have to measure the current and voltage of the led and explain how these variables are related to the current - voltage characteristics of the elements . parallel to the `` black box '' and the led , connect a resonant circuit . move the led and you will see that the light now pulsates . moreover , if you also connect in parallel the piezo element , as shown in fig .
[ lcnr ] , you will hear a faint buzz . confirm whether the led flashes and the piezo element buzzes . the following tasks are related to the detailed quantitative analysis of these oscillations and their theoretical explanation . with alligator clips.,width=332 ] in which of the two qualitative problems is the led brighter with constant or with pulsating emission ? _ the subpoints of section [ black_box_investigation ] are addressed to younger students . the problems are easier and give fewer points . _ 1 . * measure the voltage and the current through the `` black box '' , with the led connected and without it . ( 7 points ) * + connect the circuit in figure [ hypothesis_reject ] and measure the voltage on the `` black box '' and the current which flows through it . if the led of the first or the last circuit does not light up , change its polarity . write down the results of the measurements in a table , as shown in the exemplary table [ template_4_setups ] . + what causes the small difference between the voltages and ? + add an amperemeter and a voltmeter . their values are and . ( b ) replace the led with a wire , and the values are and . ( c ) we leave the amperemeter and the value is . ( d ) replace the amperemeter and the voltmeter and the value is . ( e ) measure the voltage and the current according to the following scheme ., width=612 ] + [ ] + a ) & & + b ) & & + c ) & & + d ) & & + e ) & & + + [ template_4_setups ] 2 . * measure the voltage of the battery of 1.5 v.
( 1 point ) * + use the multimeter as a voltmeter and measure the voltage of the battery . the device displays the sign of the voltage . you can remember the rules for the signs this way : the black lead is connected to the input of the device marked with `` ground '' ( ) or com , and the red lead to one of the other inputs . 3 . * measure the resistance of the big white resistor . ( 1 point ) * + switch the multimeter to work as an ohmmeter , then measure and note the resistance of the big white resistor with the thermal lining of white cement . work with an accuracy of 1 . = ? 4 . * measure the resistance of the five small resistors . ( 3 points ) * + on the connecting wires of the five resistors given to you , stick yellow labels . switch the multimeter to operate as an ohmmeter . write the numerical values of the resistances on their labels . arrange them by size and write numbers on the stickers . represent the resistances in a table ; work with an accuracy of 1 . 5 . * using the five small resistors and the battery of 1.5 v , measure the relationship between current and voltage of the big white resistor with resistance . ( 7 points ) * + an electrical circuit to measure the relationship between current and voltage is shown in fig . [ resistance - measurement ] and fig . [ r_neg_r_with_crocodiles ] . use one multimeter as an amperemeter and connect it in series with the white resistor . watch for the signs the current has a direction ! turn the other multimeter into a voltmeter , parallel to the white resistor , and again watch out for signs and polarity . for the big white resistor ohm 's law holds . if the voltage is positive , the current is positive ; otherwise the voltage is negative and the current is negative . if the signs of voltage and current are opposite , check where you went wrong in connecting the devices . + and the amperemeter ( a ) is connected in series and measures the current .
when the circuit is closed with different resistances , the current and voltage are different . thus several points of the vac are obtained . for small voltages the ratio is a constant , and this is one possible way to verify ohm 's law and the signs of the measured current and voltage . the resistor in the scheme on the left is replaced by the `` black box '' in the figure to the right , and this is the only difference between the two circuits . ,width=434 ] + with alligator clips.,width=566 ] + connect the amperemeter in series with the battery of 1.5 v. close the circuit successively with the resistances 0 and separately with the 5 ; for each measurement note in a table with 5 columns and 6 lines , in the form given in table [ template ] : 1 ) number of resistors , 2 ) resistance , 3 ) current , 4 ) the voltage of the voltmeter , and 5 ) the calculated resistance of the measured element . in the electric circuit shown in figure [ circuit ] with the three resistors , the triangle shown is a voltage amplifier with an amplification , which is powered from two batteries with a voltage . the voltages , , as well as the current , are unknown . the notation originates from .,width=340 ] the input currents in the nodes ( + ) and ( ) are zero , and the output current in the node ( 0 ) is such that the corresponding voltages are related , where the gain coefficient has a very big value . the triangle sign denotes an electric element ( called an amplifier ) that is powered by two batteries with electromotive force . the node between the two batteries is connected with a wire to the resistor and to one of the inputs of the schematics . it is very convenient to choose the potential of this node to be zero . engineers call this node `` ground '' ( ) . the index originates from the english words common point . on the multimeter this node is noted as com . in the opposite case , if the , then the equation for the voltage amplification becomes . the current that flows through the ground node is equal to zero .
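the remark above that , for small voltages , the ratio between voltage and current is constant suggests a simple data reduction : a least - squares slope through the origin applied to the measured ( current , voltage ) pairs . the numbers below are synthetic , for illustration only , and do not come from the olympiad set .

```python
def fit_resistance(currents, voltages):
    """Least-squares slope of V = R * I through the origin, i.e. the
    constant ratio expected from Ohm's law at small voltages."""
    num = sum(v * i for i, v in zip(currents, voltages))
    den = sum(i * i for i in currents)
    return num / den

# synthetic measurements (illustration only): a 100-ohm device with small
# readout noise; I in mA and V in V, so the slope comes out in kOhm
i_mA = [1.0, 2.0, 5.0, 10.0, 15.0]
v_V  = [0.101, 0.199, 0.502, 0.998, 1.502]
r_kohm = fit_resistance(i_mA, v_V)
```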
calculate the ratio between the input voltage and the current that flows through the entire circuit with an accuracy of 1% . find the effective resistance as a function of the three resistors in the circuit . for simplicity you can assume that the gain goes to infinity . substitute the following example values of the resistors into the expression found . in short : find ( 1 ) the final expression for the resistance of the entire circuit and ( 2 ) calculate the value of the resistance with an accuracy of 1% . what is the sign of the expression and what is the modulus of its value ? how does the resistance light the led ? red and robust flying . _ after the olympiad , find a screwdriver and remove the screws from the cover of the `` black box '' . take out the batteries or turn one of the switches from position `` on '' to `` off '' . _ the `` black box '' vac ( volt - ampere curve ) for small voltages is a straight line with a constant ratio with the dimension of resistance , just like in ohm 's law . try to measure the resistance of the `` black box '' with an ohmmeter . compare the readings of the ohmmeter and the resistance determined via an examination of the vac . explain why the resistance determined via the vac could not be measured with an ohmmeter . what modification has to be made in the `` black box '' circuit shown in fig . [ circuit ] , and why will it make possible the measurement with an ohmmeter , for instance with the multimeter dt-830b that you were given at the olympiad , in the range 20 k ? the first participant who finds the answer to at least one of the two questions and sends it from his / her email address registered for the olympiad to ` epo.eu ` by 07:00 on november 1 , 2015 , earns the prize of 137 .
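for the theoretical problem above , the stated amplifier rules ( equal input potentials , zero input currents , zero current through the ground node ) determine the effective resistance once the wiring is fixed . the sketch below assumes the standard negative - impedance - converter topology , which is consistent with the homework hints about negative resistance but remains an assumption about the exact wiring of fig . [ circuit ] .

```python
def nic_input_resistance(r1, r2, r3, u=1.0):
    """Ideal-amplifier node analysis of the standard negative-impedance
    converter (assumed topology): v(+) = v(-) = u, no current into the
    inputs, r3 from the input to the output node (0), r2 from (0) to (-),
    r1 from (-) to ground."""
    vout = u * (1.0 + r2 / r1)       # node (-): (vout - u) / r2 = u / r1
    i_in = (u - vout) / r3           # current drawn from the source via r3
    return u / i_in

r1, r2, r3 = 910.0, 330.0, 470.0     # example values, not from the text
r_eff = nic_input_resistance(r1, r2, r3)
```

with this wiring the closed form is r_eff = - r1 r3 / r2 , negative for any positive resistor values .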
table [ bg_experimental_results_red_led ] ( measured points for the red led ) :

1 & -2.16 & -16.83
2 & -2.12 & -14.16
3 & -2.09 & -12.31
4 & -2.08 & -10.20
5 & -2.02 & -7.90
6 & -1.99 & -5.80
7 & -1.94 & -3.95
8 & -1.88 & -1.94
9 & -1.63 & -0.01
10 & -1.678 & -25.8
11 & -1.620 & -9.6
12 & -1.492 & -2.2
13 & -0.914 & -0.9
14 & 0.020 & 0.1
15 & 0.132 & 0.2
16 & 0.589 & 0.6
17 & 1.105 & 1.1
18 & 1.743 & 1.7
19 & 2.52 & 2.4
20 & 3.72 & 3.6
21 & 4.24 & 4.1
22 & 5.25 & 5.1
23 & 6.37 & 6.1
24 & 8.03 & 7.7
25 & 8.34 & 8.8
https://en.wikipedia.org/wiki/negative_impedance_converter ; + https://en.m.wikibooks.org/wiki/circuit_idea/negative_resistance ; + https://en.wikipedia.org/wiki/negative_resistance . j.c . linvill , `` transistor negative - impedance converters '' , proceedings of the ire 41 ( 6 ) : 725 - 729 ( 1953 ) , doi:10.1109/jrproc.1953.274251 . g.j . deboo , `` gyrator type circuit '' , us patent 3493901 , filed 5 march 1968 , issued 1970 - 02 - 03 , assigned to nasa . pippard , _ the physics of vibration . the simple classical vibrator _ ( cambridge university press , 1978 ) sec . 11 , fig . 11.14 . https://en.wikipedia.org/wiki/operational_calculus ; + o. heaviside , proc . roy . soc . ( london ) * 52 * , 504 - 529 ( 1893 ) , * 54 * , 105 - 143 ( 1894 ) ; + j.r . carson , _ electric circuit theory and the operational calculus _ ( mcgraw - hill , new york , 1926 ) . data sheet ada4817 - 1/ada4817 - 2 , low noise , 1 ghz fastfet op amps , eq . ( 7 ) and eq . ( 4 ) , + http://www.analog.com/media/en/technical-documentation/data-sheets/ada4817-1_4817-2.pdf . k. g. mcconnel and w. r. riley in _ handbook on experimental mechanics _ , ed . by a.s . kobayashi ( prentice - hall , new jersey , 1987 ) , eqn . ( 3.10 ) and eqn . ( 3.12 ) . ecircuit center , op amp model - level 1 , basic op amp model , current source model , http://www.ecircuitcenter.com/circuits/opmodel1/opmodel1.htm . p.e . allen and d.r . holberg , _ cmos analog circuit design _ , third edition , the oxford series in electrical and computer engineering ( oxford university press , 2011 ) , chap . 6 , sec . 6.7 macromodels for op amps , figures 6.7 - 4 , 6.7 - 5 , 6.7 - 6 , example 6.7 - 2 , isbn : 9780199765072 , http://www.aicdesign.org/sec.6.7%281=4=02%29.pdf , + https://global.oup.com/academic/product/cmos-analog-circuit-design-9780199765072?cc=bg&lang=en&# .
power line chokes , 100 mh , http://www.farnell.com/datasheets/1465589.pdf . tl07xx low - noise jfet - input operational amplifiers , fig . 9 , http://www.ti.com/lit/ds/symlink/tl071.pdf . pomou datog kompleta eksperimentalnih ureaja ( set ureaja ) prikazanog na slici [ sr_setup ] . ispitajte zavisnost izmeu struje i napona ili kako se jo naziva voltamperska karakteristika ( vak ) `` crne kutije '' , koja se ne sme otvoriti za vreme olimpijade .
, druge otpornike , ute etikete , jedan kondenzator sa piezo - ploicom koji igra ulogu minijaturnog zvunika , led diodu povezanu na dugake ice , plastini lenjir , etiri prikjuna kabla za multimetre ( crveni i crni ) , potenciometar povezan na kontakte za baterije od 9 v , paralelno povezane kondenzator i kalem ( predstavljajui rerzonantni krug ) i najvaniji deo jednu crnu kutiju.,width=332 ] slede dva kvalitativna zadadtka opisanih u delu [ sr_quatity_tasks ] , iji je cilj da proverite dali ureaj sakriven u `` crnoj kutiji '' radi .u poglavlju [ sr_black_box_investigation ] su detaljno opisani eksperimenti koje trebate izvriti datim kompletom da bi istraili volt - ampersku karakteristiku ( vak ) `` crne kutije '' .dalje , u poglavlju [ sr_the_world_is_not_perfect ] uradiete detaljnije istraivanje i uvideti da su fizika svojstva ovog objekta koja ste do sad otkrili validna u odreenim granicama .nakon statine analize sledi poglavlje [ sr_dynamics ] u kom ete mehanikim opitima istaivati dinamika svojstva `` crne kutije '' .poglavlje [ sr_theoreticalproblem ] i m a i isto teoretski deo , povezan teoretskim opisom predloenih eksperimenata .uenicima koji nisu mnogo sigurni u eksperiment , ali se bolje nose sa matematikom predlaemo da se skoncentriu na teoretski deo zadatka .za neumorne je domai zadatak opisan u poglavlju [ sr_homework ] , uz novanu premiju od 137 .moete raditi u timu , koristiti literaturu , _ google _ , i da se na internetu konsultujete sa radioinenjerima i univerzitetskim profesorima elektronike svuda po svetu . kadaje pono u kumanovu , u kaliforniji je kasno popodne , u japanu poinje dan svuda na svetu postoje kolege koje rade . | this problem was given at third experimental physics olympiad `` the day of the resistor '' , 31 october 2015 , kumanovo , organized by the regional society of physicists of strumica , macedonia and the sofia branch of the union of physicists in bulgaria . |
[ [ generalities ] ] generalities + + + + + + + + + + + + the present contribution is devoted to the mathematical analysis of the equation equation arises in the modeling of some non - newtonian fluid flows .some details on the modeling will be given below .the variable is one - dimensional , varies on the real line , and models a quantity homogeneous to a stress ( actually to a certain entry of the stress tensor ) .the variable of course denotes the time , and equation is supplied with some initial condition .the unknown real - valued function , solution to , satisfies the two properties : it is nonnegative and normalized to one for all times . the function models the density of probability to have a certain ( elementary microscopic ) stress , at time , at a macroscopic space position . the actual , deterministic stress within the fluid is thus given by , of equation below .equation is thus implicitly parameterized by this position ( thus the multiscale nature of the problem , as will be seen below ) .the function , also a function of time , is assumed given .it models the shear rate at the position , under which we wish to compute , using , the mesoscopic response of the fluid .the notation is traditional in fluid mechanics , hence its use here . in ,we denote by the characteristic function }\ ] ] where a scalar positive parameter , fixed once and for all .it models some local threshold value of the stress , which plays a crucial role in the modeling .as is usual , we denote by the dirac mass at zero . 
from the definitions of and , it is immediately seen that , at least formally ( and this will indeed be made rigorous , see lemmata [ lem : max ] and [ lem : mass ] in section [ sec : properties ] below ) , equation preserves in time the two properties and . two quantities are typically computed using the solution to : first the so - called _ fluidity _ and next the ( real - valued ) stress . our purpose in this article is to mathematically study equation ( in terms of existence and uniqueness of the solution , properties and long - time behavior of that solution ) and to derive a macroscopic equation equivalent to this equation . by _ macroscopic _ equation , we mean an equation ( actually a differential equation , or a system of differential equations ) that directly relates the shear rate , the fluidity and the stress without the explicit need to compute . in these macroscopic equations , the scalar will be the inverse of the mechanical relaxation time , hence the name `` fluidity '' . we will be able , in particular , to obtain a macroscopic equation which is close to models that have been proposed for aging fluids ; see the discussion at the end of section [ sec : passage ] . [ [ some - elements - on - the - modeling ] ] some elements on the modeling + + + + + + + + + + + + + + + + + + + + + + + + + + + + + equation is the simplest possible form of an equation describing the mesoscopic behavior of a complex fluid , such as a concentrated suspension or , more generically , a soft amorphous material , with properties intermediate between those of a fluid and those of a solid . these materials exhibit a highly non - newtonian behavior and may give rise to a macroscopic yield stress .
at low stress , such a material behaves in an elastic way .but above a certain stress threshold , here denoted by the critical value , one observes a relaxation toward a completely relaxed state .this behavior is modeled by equation .the probability of finding the fluid in the state of stress at time evolves in time for two different reasons : the term models the modification of the stress induced by the existence of the shear rate , while the term encodes the relaxation toward zero of the part of the stress above the threshold . from a probabilistic viewpoint ,the stochastic process associated to the fokker - planck equation evolves deterministically when and jumps to zero with an exponential rate when .the process belongs to the class of _ piecewise - deterministic markov processes _ , which have been introduced in the probabilistic literature in the 1980 s for biological modeling for example . in particular, coupling arguments have been proposed to study the longtime behavior of such processes ( see ) .we argue on the fokker - planck equation and proceed differently .the argument we are using here to study the longtime behavior is purely deterministic in nature , and is based on a delay equation related to the fokker - planck equation .we would like to mention that for more realistic models , a third phenomenon is typically at play , in addition to the stress induced by the ambient fluid , and to the relaxation to zero .all states of stress are not independent of one another , and they may also depend on the state of stress at neighboring points within the fluid .a certain redistribution of the stress therefore always occurs .this redistribution can be encoded in various ways , depending on some more detailed elements of modeling . 
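the jump dynamics just described lends itself to direct simulation . below is a minimal monte carlo sketch of the piecewise - deterministic markov process ( drift at the shear rate between jumps , jump to zero at exponential rate above the threshold ) ; the threshold , the constant shear rate , the unit jump rate and all step sizes are illustrative choices , not values taken from the text .

```python
import numpy as np

# monte carlo sketch of the piecewise-deterministic markov process described
# above: between jumps each stress "element" drifts as d(sigma)/dt = gammadot;
# whenever |sigma| > sigma_c it jumps back to sigma = 0 at unit exponential
# rate.  sigma_c, gammadot, the unit rate and the step sizes are illustrative.
rng = np.random.default_rng(1)
sigma_c, gammadot = 1.0, 0.5
n_el, dt, t_end = 50_000, 0.01, 40.0

sigma = np.zeros(n_el)                       # all elements start fully relaxed
for _ in range(int(t_end / dt)):
    sigma += gammadot * dt                   # deterministic drift
    above = np.abs(sigma) > sigma_c
    jumps = above & (rng.random(n_el) < 1.0 - np.exp(-dt))
    sigma[jumps] = 0.0                       # relaxation to the zero state

fluidity = np.mean(np.abs(sigma) > sigma_c)  # f = <chi>, here ~ 1/3
stress = np.mean(sigma)                      # tau = <sigma>, here ~ 5/6
print(fluidity, stress)
```

for this constant shear rate the stationary density is flat on ( 0 , sigma_c ) and exponentially decaying above it , which gives f = gammadot / ( sigma_c + gammadot ) = 1/3 and a mean stress of 5/6 ; the simulated values approach these as the number of elements grows .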
in the so - called hébraud - lequeux model introduced in the seminal article ( and then extensively studied mathematically in the works ) , the redistribution is performed by a diffusion term in the stress space , at the given location in the ambient physical space , and the complete equation thus writes where is some parameter . in an alternative model introduced by bocquet and collaborators in , the redistribution is achieved by some type of local `` convolution '' in the physical space . the equation ( we recall , set at the physical location ) writes with a function related to the green function of some local stokes - type problem . the equation which we study in the present article ignores the redistribution phenomenon , which amounts to taking in or in . in the absence of such a simplification , we are unable to proceed with the main result of this article , which is the derivation of the macroscopic equation from our multiscale model . the well - posedness result contained in our theorem [ th : exu ] , on the other hand , also holds for with and has indeed been established some years ago in . some more detailed comments on the modeling , as well as some formal foundations of the model based on a system of interacting particles , are presented in . [ [ plan - of - our - contribution ] ] plan of our contribution + + + + + + + + + + + + + + + + + + + + + + + + our article is organized as follows . to start with , we study in section [ sec : ss ] the stationary solutions to . we next show in section [ sec : exist - uniq ] existence and uniqueness of the solutions to the time - dependent equation . our result is stated in theorem [ th : exu ] . section [ sec : properties ] follows , establishing some useful properties of the solution .
in order to understand the macroscopic equivalent of equation for a given shear rate , which we assume varies slowly as compared to the characteristic time of equation , we need to understand the long - time behavior of the solution to .we therefore study this behavior in sections [ sec : tlc ] and [ sec : long - time ] , respectively in the case of a constant shear rate , and in the case of a slowly varying shear rate .the results are stated in theorems [ th : tlc ] and [ th : tle ] .we are then in position to derive , in section [ sec : passage ] , the macroscopic differential equations equivalent to in this limit , namely system .our final section , section [ sec : numerics ] , presents some numerical experiments which confirm and illustrate our theoretical results .we study in this section the stationary states of .we therefore assume that is a fixed scalar and consider the solutions to the following equation here and in the following , for a subset , denotes the set of distributions on . by convention ,since the time - dependent version of the equation is linear and formally preserves positiveness and the integral over the real line , we are only interested in the stationary solutions that additionally satisfy and , that is , we have the following result : [ lem : ss ] when , the solutions to - are exactly all nonnegative normalized densities with compact support in ] , hence the result .up to a change of into , we may , without loss of generality , consider only the case for our proof .we first note that defined by is a solution to , hence the existence result .we now show uniqueness . by linearity, we assume that is a solution of with and show that .equation implies in ) ] , rewrites differentiating in the sense of distributions in time , we obtain using the definitions of and of . 
using that is continuous on with , we find on ] and .then , there exists a unique solution verifying in the sense of distribution , and satisfies : for , the result of lemma [ lem : varconst ] is stated in for continuous but holds for .indeed , the existence of a unique solution is still valid in this more general setting ( see ) and expression satisfies almost everywhere .as announced above , we first use lemma [ lem : residue ] to prove the following [ lem : cons ] assume then there exist and which depend only on and and such that for all with moreover , can be chosen as , for any .we immediately emphasize that the point in lemma [ lem : cons ] is to show that the prefactor and the exponent appearing in do not depend on itself , but can be chosen locally uniformly , that is , depend only on the bounds and of the interval where lies .proving for a fixed is a simple consequence of the classical results contained e.g. in .[ rem : alternate ] using numerical experiments , we will show in section [ sec : numerics ] that the rate given by for the estimation is indeed sharp .it is interesting to note that our result [ lem : cons ] in the present section does not explicitly require such a sharpness .a simpler alternate proof ( which we owe to one of the anonymous referees ) shows a similar , however non sharp estimation .that proof is based on the observation which shows that the function necessarily cancels on any interval .this leads to the following induction relation on the maximum value of on indeed , denoting a real such that , solution of satisfies , for all , hence .denote the integer part .the induction relation then implies and therefore , using that for all , }(t)-\frac1{1+\omega}, ] , satisfies moreover , we combine and and obtain this implies that is negative and therefore , using the bounds and respectively on and , the function is continuous , satisfies and so that by the intermediate value theorem , there exists }. 
\end{aligned}\ ] ] the scalar depends only on .additionally , the real part of the nonzero roots of satisfies and , combining and , therefore can be chosen as , for any .[ [ step - applying - lemmalemresidue ] ] step : applying lemma [ lem : residue ] from the previous step , we know that the only root of with real part strictly above is .we now apply lemma [ lem : residue ] with .since the root is a simple root of , the residue of at is .equation therefore writes proving that there exists which depends only on and , such that for all , therefore amounts to concluding the proof of lemma [ lem : cons ] .actually , we will show that this holds up to changing to in the right hand side , for any positive .this will conclude the proof .[ [ step - exponential - bound ] ] step : exponential bound we first show , for all , by integration by parts , we have }_{-t}^t.\end{aligned}\ ] ] introduce such that for all , then , for all , so that , for all , }_{-t}^t \right\vert } \le \frac{4{{\rm e}^}{-bt}}{{\left\vert t \right\vert}t}.\end{aligned}\ ] ] by passing to the limit in , we thus obtain . now , for all , introduce which we may take depending only on , such that for all , so that for and , this implies the function is continuous for } ] where .this ends the proof .now that we have proved the technical lemma [ lem : intt ] , we turn to the _ proof of theorem [ th : tle ] . _the proof is divided into six steps .the first step establishes a delay differential equation on a function , for which an explicit decomposition is known thanks to the lemma [ lem : varconst ] .we then rewrite in a different form whose terms are estimated in steps [ s : k0 ] and [ s : k1 ] . in the last two steps , we use these estimates to obtain and then . before we get to the proof we introduce some notation .the scalars are fixed and satisfy . 
in section[ sec : tlc ] , we have introduced which from bounds on satisfies we can therefore apply lemma [ lem : cons ] to the function satisfying with , so that there exist which depend only on , and such that and hold for all .notably , and are independent from ( and ) .[ [ step - applying - lemmalemvarconst ] ] step : applying lemma [ lem : varconst ] for a fixed , denote by and by the solution to with the initial condition .consistently with , let us also introduce where satisfies .then belongs to ( because and do , see lemma [ lem : dde ] ) and satisfies , for almost all introduce .we apply lemma [ lem : varconst ] and obtain ( using the same computations as in above and the fact that ) , [ [ step - rewriting - g_epsilon ] ] step : rewriting in order to rewrite we show that , for almost all , first , the function solution to with as initial condition satisfies , for all , denote a function of , a mollifier on and . inserting in yields , for sufficiently large such that , which rewrites the function belongs to ,l^1) ] , and converge to defined by as goes to infinity. 
moreover belongs to ( the proof is similar to the one in step [ s : espace ] of theorem [ th : exu ] ) so that is continuous in .passing to the limit in the above equation yields hence since and belong to .we then denote so that the expression on rewrites using the decomposition ( see ) that was established in lemma [ lem : cons ] .we now derive estimates on the two terms of the above expression , when .[ [ s : k0 ] ] step : estimate of introduce }}(\sigma ) { { \rm e}^}{\frac{\sigma + \sigma_c}{{\dot\gamma}(\theta)}}\end{aligned}\ ] ] which satisfies , in , }}(\sigma ) { { \rm e}^}{\frac{\sigma + \sigma_c}{{\dot\gamma}(\theta)}}\end{aligned}\ ] ] and }}(\sigma ) { { \rm e}^}{\frac{\sigma + \sigma_c}{{\dot\gamma}(\theta)}}.\end{aligned}\ ] ] for , take as a test function }}(\sigma ) { { \rm e}^}{\frac{\sigma + \sigma_c}{{\dot\gamma}(\theta ) } } \zeta^m_{[0,s)}(t)\end{aligned}\ ] ] in and pass to the limit in and then . here and in the following , denotes a function with compact support in , such that converges pointwise to .we omit the details , the arguments being similar to those in .we obtain with changes of variable and in the last two integrals , this rewrites let us assume that and introduce }}(\sigma ) { { \rm e}^}{\frac{\sigma+\sigma_c}{{\dot\gamma}(\theta ) } } { \rm 1\mskip-4mu l}_{(0,s-2\omega_\theta)}(t ) \nonumber\\&\quad + { \rm 1\mskip-4mu l}_{{\left[-\sigma_c,\sigma_c-{\dot\gamma}(\theta){\left(s - t\right)}\right]}}(\sigma ) { \rm 1\mskip-4mu l}_{(s-2\omega_\theta , s)}(t ) -{\rm 1\mskip-4mu l}_{[-\sigma_c,\sigma_c]}(\sigma ) \end{aligned}\ ] ] which satisfies , in , }}(\sigma ) { \rm 1\mskip-4mu l}_{(0,s-2\omega_\theta)}(t ) - \delta_{\sigma_c-{\dot\gamma}(\theta){\left(s - t\right)}}(\sigma ) { \rm 1\mskip-4mu l}_{(s-2\omega_\theta , s)}(t)\end{aligned}\ ] ] and }}(\sigma ) { \rm 1\mskip-4mu l}_{(0,s-2\omega_\theta)}(t ) \\&\qquad -({\dot\gamma}(\theta)-{\dot\gamma}(\epsilon t ) ) \delta_{\sigma_c-{\dot\gamma}(\theta){\left(s - t\right)}}(\sigma 
) { \rm 1\mskip-4mu l}_{(s-2\omega_\theta , s)}(t).\end{aligned}\ ] ] we again use a regularization }}(\sigma ) { { \rm e}^}{\frac{\sigma+\sigma_c}{{\dot\gamma}(\theta ) } } \\zeta^m_{[0,s-2\omega_\theta)}(t ) \\&\quad + \rho^n*{\rm 1\mskip-4mu l}_{{\left[-\sigma_c,\sigma_c-{\dot\gamma}(\theta){\left(s - t\right)}\right]}}(\sigma ) \ \zeta^m_{(s-2\omega_\theta , s)}(t ) \\&\quad -\rho^n*{\rm 1\mskip-4mu l}_{[-\sigma_c,\sigma_c]}(\sigma ) \ \zeta^m_{[0,s)}(t)\end{aligned}\ ] ] and pass to the limit in in addition , from its definition , we know that satisfies ( see ) summing up expressions , and , we obtain taking and summing up the second and the third term ( with the change of variable in the third term ) , this rewrites using the lipschitz property of and -bound on , this implies for a constant , we have using estimate or variants , we find since , we obtain with a constant that is independent from and . throughout the rest of the proof below, we will likewise denote by such a constant , whose precise value may change from one occurrence to another .[ [ s : k1 ] ] step : estimate of inserting expression of ( see above for a similar computation ) , defined by satisfies for further use , notice that the integral in the second line above can be rewritten as : in order to rewrite the first term of the right - hand side , we use again the functions and respectively defined by and . 
using a regularization of as test function in ( with ) and then passing to the limit, we obtain this rewrites ( using similar computations as in above ) similarly , using a regularization of as test function in , we obtain so that ( using similar computations as in above ) we now perform the linear combinations : - ( + ) .the last two term of the right - hand side of cancel out with the right - hand sides of and ( using in particular ) so that using lemma [ lem : cons ] , satisfies ( this is a consequence of and ) so that ( using ): using the lipschitz property of , the -bound of and estimates and of and , we obtain using the estimate or variants and the bounds on , one concludes [ [ step - estimate - of - g_epsilonleftdfracthetaepsilonright ] ] step : estimate of using the decomposition and estimates and , we find moreover , the estimate established in theorem [ th : tlc ] yields , using the bounds on where is independent from and . by the definition of , we have so that where is independent from and .this concludes the proof of .[ [ step - estimate - of - p_epsilonleftdfracthetaepsilonsigmaright --- p_inftythetasigma ] ] step : estimate of recall that the scalars are fixed and satisfy . for all , denote where .the expression of , that was established in theorem [ th : exu ] , reads , for almost all such that , note that the condition is not restrictive because we are interested in the limit for a fixed .the rest of the proof depends on the value of .let us start with the case .we have we now apply lemma [ lem : intt ] with and and obtain notice that actually holds for all ( this will be used below ) . 
from the expression of , when .this gives for almost all .let us now consider the case ] , we deduce ( using again ) moreover , we have , from , for almost all ] and use as a test function in : we pass to the limit , using that the function belongs to ,l^1) ] as test function in and passing to the limit with the same arguments as in , we obtain consider the solution of the ordinary differential equation supplied with a scalar as initial condition . subtracting the above two equations and using that ( see ) and , we obtain with recall that and . applying the duhamel formula yields , using the upper bound, satisfies moreover , solution of satisfies using , the boundedness of and the estimate on , the right - hand side of satisfies , inserting the three above inequalities in implies additionally , applying the duhamel formula to and using the explicit formula yield using the lipschitz property of and the estimate , we obtain combining and leads to and eventually , respectively using and to estimate the two terms of the right - hand side . herewe have used the notation , and the fact that . [ [ step - approximation - of - tau_epsilon ] ] step : approximation of we now turn to . 
combining and yields from the formula on , we compute , , so that the term cancels out because of the definition of .we therefore obtain with we have ( see ) so that the duhamel formula then implies using the upper bound , satisfies moreover , the solution of satisfies ( using the non negativity of and ) : so that collecting established in theorem [ th : tle ] , and , the right - hand side of satisfies inserting the three above inequalities in implies we end this section with a discussion on the two macroscopic limits and we have obtained .first , as mentioned above , there are many ways to close the system in the limit .we have proposed here two possible macroscopic limits , which are indeed close up to terms of order to the original problem .second , we would like to argue that the system derived in theorem [ th : mac2 ] is physically more relevant .indeed , up to changing the coefficient defined by by a constant , this system belongs to a class of equations introduced in to model the evolution of aging fluids .these equations read ( see ( * ? ? ?* eq . ( 1 ) ) ) [ eq : derec ] align & = -f+ , [ eq : derectau ] + & = - u(f ) + v(f , , ) , [ eq : derecf ] where and are positive functions .the formal similarity between and is clear . for this class of systems ,the fluidity appears as the inverse of the relaxation time for the stress in equation . in equationthe evolution results from the competition between the two terms with opposite signs .aging , meaning solidification of the fluid , is modeled by the negative term .it makes the fluidity decrease so that the relaxation phenomenon is slower with time .the opposite effect , flow - induced rejuvenation , is modeled by the positive term , which makes the fluidity ( the inverse relaxation time ) increase .note that the assumption constant is a reasonable approximation when is small . in this case , system is close to system , which is indeed of the form . 
in section [ sec : numerics ] , we present numerical results that confirm that the solutions to and are indeed close when is small . this section is devoted to some numerical experiments . we consider three different situations , depending on the value of the function for ] . since in theory it is posed on the whole real line , we need to truncate the domain and thus actually solve the equation on the bounded interval ] . the threshold value for the stress is . the time discretization is performed using a splitting method : over the time interval ] , with the time step . the values of , and are displayed on figures [ fig : directfk_f ] and [ fig : directfk_tau ] . we observe that the convergence is linear in , as predicted by our theoretical results , theorems [ th : mac1 ] and [ th : mac2 ] . the macroscopic behavior is thus suitably reproduced , up to an error of size . \(iii ) our final test case addresses the case . we again rescale the time and consider . a similar experiment to that performed in the previous case ( ii ) again shows that equation reproduces well the stress tensor computed from the solution to equation , for the different values of . simulating and , we compute . the results are displayed on figure [ fig : directfk001_tau ] . but our purpose here is also to illustrate another fact . when is small , and it is indeed the case for our specific choice of in this case ( iii ) , the value of the parameter defined by and appearing in the macroscopic system is approximately 2 . system is thus close to the system . as explained above , this system of differential equations belongs to the class of systems explicitly suggested in as a macroscopic system of evolution of and for a non - newtonian aging fluid . our theoretical results of the previous sections can therefore be interpreted as a _ derivation _ , from a model at a finer scale , of the macroscopic system , present in the applicative literature .
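the splitting strategy mentioned above can be sketched in a few lines . the fragment below is a simplified stand - in for the paper's actual solver ( whose parameter values are not reproduced here ) : an upwind transport step for the drift term , followed by an exact relaxation step that removes the mass above the threshold and re - injects it at sigma = 0 . the constant shear rate , the grid and the final time are illustrative assumptions .

```python
import numpy as np

# simplified splitting scheme for the kinetic equation (assumed form)
#   dp/dt = -gammadot dp/dsigma - chi(sigma) p + (integral of chi p) delta_0,
# with chi = 1 for |sigma| > sigma_c.  all numerical parameters below are
# illustrative and not those of the paper.
sigma_c, gammadot = 1.0, 0.5
L, n = 6.0, 1201                       # truncated domain [-L, L]
s = np.linspace(-L, L, n)
ds = s[1] - s[0]
dt = 0.5 * ds / gammadot               # cfl-stable step for upwind transport
chi = (np.abs(s) > sigma_c).astype(float)
i0 = n // 2                            # grid cell carrying the dirac mass at 0

p = np.zeros(n)
p[i0] = 1.0 / ds                       # initial condition: delta at sigma = 0
for _ in range(int(40.0 / dt)):
    # step 1: upwind transport for gammadot > 0
    p[1:] -= gammadot * dt / ds * (p[1:] - p[:-1])
    p[0] = 0.0
    # step 2: exact relaxation; the removed mass reappears at sigma = 0
    removed = np.sum(chi * p * (1.0 - np.exp(-dt))) * ds
    p *= np.exp(-chi * dt)
    p[i0] += removed / ds

fluidity = np.sum(chi * p) * ds        # ~ gammadot/(sigma_c+gammadot) = 1/3
stress = np.sum(s * p) * ds            # ~ 5/6 for these parameter values
print(fluidity, stress)
```

the scheme conserves the total mass ( up to a negligible leak at the right boundary ) , and for this constant shear rate the computed fluidity and stress settle at the stationary values 1/3 and 5/6 .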
on figure [ fig : directfk001_tau ], we indeed check that the stresses solution to the systems and are close , up to an error of size , to the stress provided by , | we study a one - dimensional equation arising in the multiscale modeling of some non - newtonian fluids . at a given shear rate , the equation provides the instantaneous mesoscopic response of the fluid , allowing to compute the corresponding stress . in a simple setting , we study the well - posedness of the equation and next the long - time behavior of its solution . in the limit of a response of the fluid much faster than the time variations of the ambient shear rate , we derive some equivalent macroscopic differential equations that relate the shear rate and the stress . our analytical conclusions are confronted to some numerical experiments . the latter quantitatively confirm our derivations . non - newtonian fluids ; micro - macro model ; longtime behavior . |
we revisit a classical problem : the transmission through a linear array of many identical slabs ( glass plates , plastic transparencies , or the like ) with random separation , as depicted in fig . [ fig : stack ] . the transmission probability that stokes derived in 1862 on the basis of ray - optical arguments ( thereby improving on an earlier attempt by fresnel in 1821 ; see refs . and for the history of the subject ) is not correct because there are crucial interference effects that require a proper wave - optical treatment . just that was given by berry and klein in 1997 , who found that the average of the logarithm of the transmission probability through slabs is equal to times the logarithm of the single - slab transmission probability . here , is the probability of transmission through a single slab and denotes the transmission probability for slabs . its implicit dependence on the random phases that originate in the random spacing of the slabs is averaged over , indicated by the notation . as emphasized in ref . , the disorder is crucial ; without it , most wavelength components would be transmitted , and the stack would then appear rather transparent , but this is not the case , as a simple experiment with a stack of transparencies demonstrates . identical slabs , each with single - slab transmission probability . the stack as a whole has transmission probability , which depends on the phases that result from the random spacing of the slabs . we are interested in , the transmission probability of the stack averaged over the phases . ] it is indeed common to average logarithms because they are known to be `` self - averaging '' , and the exact result ( [ eq : logaver ] ) is truly remarkable . but one should realize what it tells us about the average transmission probability itself .
as a consequence of the inequality the berry klein relation ( [ eq : logaver ] ) amounts to a lower bound on the average transmission probability , as we shall see below , this bound is not particularly tight because there is a very large range of individual values . in particular , we note that the ray - optics result is consistent with ( [ eq : bk - lowlim ] ) .it is the objective of the present contribution to report good wave - optics estimates for and closely related quantities .in particular , we will improve on the lower bound of ( [ eq : bk - lowlim ] ) and supplement it with an upper bound .we observe that the upper bound , when used as an approximation for , is unreasonably good and seems to give us the exact asymptotic values of quantities such as or . at present , this coincidence of the upper bound with exact asymptotic values is a poorly understood mystery .amplitudes on both sides of the slab .the unitary scattering matrix of ( [ eq : s ] ) relates the incoming amplitudes and to the outgoing amplitudes and , whereas the transfer matrix of ( [ eq : t ] ) connects the amplitudes on the left with the amplitudes on the right . ] for a wave of wavelength , the wave functions to the left and to the right of the slab are where is the position of the left edge and is the thickness of the slab ; see fig .[ fig : slab ] .the incoming amplitudes are related to the outgoing amplitudes by the unitary _ scattering matrix _ , where the entries of are restricted by which account for the single - slab transmission probability and the unitary nature of .the particular values of the complex phases of , , , and are of secondary interest , but we note that we have and for a completely transparent , non - scattering slab , for which . 
the _ transfer matrix _ is used to express the amplitudes on the right in terms of the amplitudes on the left , the one - to - one relation between and implies that the transfer matrix is of the form 0 & { \mathalpha{\mathrm{e}}^{\mbox{\footnotesize}}}\end{array}\right ) \left(\begin{array}{cc } \cosh\theta & \sinh\theta \\[1ex ] \sinh\theta & \cosh\theta \end{array}\right ) \left(\begin{array}{cc}{\mathalpha{\mathrm{e}}^{\mbox{\footnotesize } } } & 0\\ 0 & { \mathalpha{\mathrm{e}}^{\mbox{\footnotesize}}}\end{array}\right),\ ] ] where and , , are phase factors that have fixed values which , however , are largely irrelevant for what follows .the transfer matrix for the gap of length between the slab and the slab is the diagonal phase matrix 0 & { \mathalpha{\mathrm{e}}^{\mbox{\footnotesize}}}\end{array}\right)\,.\ ] ] phase matrices of the same structure sandwich the central -dependent matrix in ( [ eq : gent ] ) , so that we have \sinh\theta & \cosh\theta \end{array}\right)\ ] ] as a more useful way of writing .the product of two transfer matrices is another transfer matrix , whereby the relevant observation is the composition law with determined by and the phases and by whereas ( [ eq : gammas ] ) is of no consequence for the following considerations , the rule ( [ eq : newtheta ] ) is of central importance .we now turn to the situation of fig . [fig : stack ] , where we have identical slabs separated by gaps , , , that are not controlled on the scale set by the wavelength .therefore , we regard the phase factors as random with a uniform distribution on the unit circle in the complex plane . 
the over - all transfer matrix is characterized by which is obtained by repeated application of the composition rule ( [ eq : newtheta ] ) , whereby the phases have random values .each experimental realization of the -slab stack of fig .[ fig : stack ] has different values for these random phases , and the transmission probability varies from one experiment to the next .we need to average over the random phases to find . regarding the meaning of in ( [ eq : recurrence ] ) and ( [ eq : faver ] ) .the random phases , , , have already been averaged over , but the averaging over the phases , , is yet to be performed . ]let us consider a somewhat more general question : what is the average value of a function of , and thus of a function of ?when the averaging is carried out successively , first averaging over , then over , and so forth , finally over , we have an intermediate value after averaging over the first random phases ; see fig .[ fig : recursion ] . here denotes the value of for the stack of slabs to with its dependence on the remaining phases , ,we then have for the value prior to any averaging , and ( [ eq : newtheta ] ) tells us that we get from by means of and the integration covers any convenient interval of . eventually this takes us to when the recursive averaging over the random phases is completed . 
for illustration , we take as a first example .the recurrence relation ( [ eq : recurrence ] ) yields , so that or when stated in terms of transmission probabilities .a second illustrating example is , for which ^{n-1}f_1^{\ } ( c')\ , , \nonumber\\ f_n^{\ } ( c)&=&\frac{2}{3}\biggl[\frac{3}{2}f_1^{\ } ( c)\biggr]^n\,,\end{aligned}\ ] ] and ^n\ ] ] follows .taken together , ( [ eq:1stex ] ) and ( [ eq:2ndex ] ) tell us that the normalized variance of grows exponentially with the number of slabs , ^n\nonumber\\ & \approx&\frac{2}{3}\biggl[\frac{3}{2 } -\frac{1}{2}\bigl(\frac{\tau_1^{\ } } { 2-\tau_1^{\ } } \bigr)^2\biggr]^n \quad\mbox{for .}\end{aligned}\ ] ] the values of cover a correspondingly large range , and so we understand why the two sides of ( [ eq : logineq ] ) differ by much .this brings us to the much more important case of here , is a manifestation of the `` self - averaging '' of the logarithm ( not _ any _ logarithm though , but this particular one ) , and we get this is the berry klein result ( [ eq : logaver ] ) , of course .finally , we turn to calculating .the first few are giving and it is frustratingly difficult to go beyond .but it is possible to evaluate the recurrence relation ( [ eq : recurrence ] ) numerically and so determine . in passing ,we note that and for ; ray optics fails for . average transmission probability for a stack of identical slabs . for and ,the dotted curve ` a ' shows the values of computed by a numerical evaluation of the recurrence relation ( [ eq : recurrence ] ) , commencing with the small- functions of ( [ eq : firstfew ] ) .the crosses that follow curve ` a ' are values obtained by a monte carlo calculation that simulated experimental realizations .the two solid lines are the upper and lower bounds of ( [ eq : upperbound ] ) and ( [ eq : lower4 ] ) , respectively . the dashed line ` b ' is the lower bound ( [ eq : bk - lowlim ] ) derived by berry and klein . 
] for , the outcome of such a computation is shown in the lin - log plot of fig .[ fig : logplot ] as the dotted curve ` a ' .the crosses near the curve were obtained by a monte carlo calculation in which experiments were simulated with up to slabs . the straight dashed line ` b ' is the lower bound of ( [ eq : bk - lowlim ] ) .the solid lines are the upper and lower bounds discussed in the next section .other values of result in plots with the same general features .since , we have where the second inequality recognizes that the integral in the first line is a monotonically increasing function of , so that the replacement increases its value .the integral defining is of elliptic kind and its value is less than if , that is : if .we conclude by induction that holds for .the upper bound then follows .the ray - optics result ( [ eq : rayoptics ] ) is inconsistent with this upper bound .figure [ fig : bounds ] shows as a function of . upper bound and lower bound on as functions of .the dashed straight line is the lower bound on of ( [ eq : bk - lowlim ] ) . 
]we derive a lower bound by first observing that with \sqrt{\tau_1^{\ } -\tau_1 ^ 2/4 } & \tau_1^{\ } \leq2-\sqrt{2 } \end{array}\right.\ ] ] and then inferring by induction that holds for .the lower bound then follows .the plot of as a function of in fig .[ fig : bounds ] shows that for and , therefore , this lower bound is more stringent than ( [ eq : bk - lowlim ] ) , but it is not tight either .we are certain , however , that is bounded exponentially both from above and from below .figure [ fig : linplot ] illustrates the two bounds for .the values for curve ` a ' are obtained by the numerical evaluation of the recurrence relation ( [ eq : recurrence ] ) .clearly all values are well within the two bounds , the horizontal dashed lines .this figure , and analogous plots for other values of , suggest that the corresponding observation in fig .[ fig : logplot ] is that , for sufficiently large , line ` a ' there is parallel to the solid line for the upper bound . at present , ( [ eq : conject ] ) is no more than a conjecture that is supported by a body of numerical evidence .values of for .the bounds of ( [ eq : bounds ] ) are the two horizontal dashed lines .curve ` a ' shows the actual values .the extrapolation explained in the context of ( [ eq : conject ] ) and ( [ eq : extrapol ] ) gives curve ` b ' . ] some of the evidence is curve ` b ' in fig .[ fig : linplot ] .its values are obtained by an extrapolation that assumes that for large with and slowly varying with . 
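the extrapolation just described , assuming f(n) ~ a(n) q(n)^n with a and q slowly varying , can be sketched as follows ; the synthetic sequence and its 1/n correction are illustrative assumptions , not the actual curve ` a ' data :

```python
def extrapolate(f, n):
    """Estimate the geometric ratio q and prefactor a from two consecutive
    values f(n), f(n+1), assuming f(m) is approximately a * q**m."""
    q = f(n + 1) / f(n)
    a = f(n + 1) / q ** (n + 1)
    return a, q

# illustrative sequence: geometric up to a slowly varying 1/n correction
A, Q = 2.0 / 3.0, 0.45
f = lambda m: A * Q ** m * (1.0 + 0.5 / m)

for n in (5, 20, 80):
    a, q = extrapolate(f, n)
    print(n, a, q)    # the estimates drift towards A and Q as n grows
```

successive estimates of this kind , plotted against n , play the role of curve ` b ' : their approach to a constant is the numerical evidence for a purely exponential large - n behaviour .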
for two consecutive values of curve ` a ' we can get an estimate of and , and curve ` b ' represents the successive values of thus extrapolated .the rapid and consistent approach of ` b ' to the horizontal line of the upper limit feeds the expectation that the conjecture ( [ eq : conject ] ) could be true .we leave the matter at that .we established the recurrence relation ( [ eq : recurrence ] ) that facilitates the calculation of the average value of any function of , the transmission probability through the stack of identical slabs with random gaps between them .we observed that the individual values of are spread over a large range and , therefore , exceeds by much .further , we derived strict upper and lower bounds on , both bounds being exponential functions of .the ray - optics prediction for is consistent with the lower bound but not with the upper bound .the upper bound , when used as an approximation for , is of much better accuracy than its derivation suggests and , based on numerical evidence , we conjecture that it is asymptotically exact .we are grateful for discussions with dominique delande .centre for quantum technologies is a research centre of excellence funded by ministry of education and national research foundation of singapore . | the transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability . we show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower bounds . the upper bound , when used as an approximation for the transmission probability , is unreasonably good and we conjecture that it is asymptotically exact . |
reactor neutrinos have been used for fundamental research since the discovery of neutrinos . the last decade has witnessed a remarkable reduction of systematic error in reactor neutrino experiments , by about one order of magnitude , imposed by the high precision needed to measure by double chooz ( dc ) , daya bay ( db ) and reno , a milestone for the world strategy of neutrino flavour research . reno has released several updates of the analysis in conferences , disregarded here until publications follow . the reactor measurements are consistent with all measurements sensitive to obtained via other techniques , providing a coherent perspective , as obtained by global fit analyses . since the reactor experiments ' precision is unrivalled , they are expected to dominate the world knowledge on , likely , for a few decades to come . hence , reactor systematics dominate much of the world knowledge as experiments reach their final sensitivities . the measured ( and its uncertainty ) is expected to play a critical role in constraining , or measuring , still unknown neutrino oscillation observables , such as cp - violation and the atmospheric mass hierarchy . to maximise the sensitivity to , reactor experiments were forced to conceive experimental setups where flux , detection and background systematics are controlled to the unprecedented level of a few per - mille each ( < 1% total ) . the statistical resolution is boosted by using multi - reactor sites . the unprecedented precision achieved is experimentally very challenging ; therefore the redundancy among -experiments is critical , especially if their uncertainty budgets are complementary so as to offer maximal cross - validation .
despite some complementarity , the reactor experimental setups are unavoidably similar and suffer from similar limitations , hence validation by different techniques would be important , although the precision needed is unattainable today . the aforementioned precision improvement was obtained via multi - detector experimental setups , whereby , at least , two detectors are used for the reduction of the overall systematic budget , since correlated systematics among detectors cancel out . this way , while the _ absolute systematics _ are the same , the _ relative systematics _ are much lower . the _ absolute systematics _ are still dominant in any single - detector setup , such as dc ( single detector ) , but also all past and , likely , most future reactor experiments . the systematics reduced by the multi - detector configuration are : detection and flux systematics . detection systematics benefit from dedicated detector design for them to be _ effectively identical _ , typically so , only upon full calibration , thus implying the same ( or negligibly different ) responses and composition ( cross - section , proton number , etc ) . flux systematics benefit from the fact that the _ near _ detector(s) is located closer to the reactor(s) such that the flux modulation originating from neutrino oscillations , in this case , is negligible ( or very small ) compared to the _ far _ detector(s) located further away . the far detector is placed at the expected maximal oscillation deficit driven by the ( atmospheric ) constraint from minos and t2k . the suppression of the flux systematic is highly non - trivial , being the main subject of this publication . while having effectively identical detectors suffices to reduce detection systematics , from .0% to .2% , just having near and far detectors will not necessarily provide a full cancellation of flux systematics - unlike what was originally thought . there are three mechanisms leading to possible flux systematic reduction to be elaborated in
detail within the paper . first , multi - reactor uncorrelated uncertainties can benefit from having several identical reactors . second , the near - far geometry of the experimental setup could enhance the ability of the near detector to become an effective _ perfect monitor _ of the far , i.e. fully cancelling the flux systematic error . and , third , the nature of the reactor uncertainties , i.e. whether correlated or uncorrelated among reactors , might be exploited , as measured by multiple detectors . reactor systematics are suppressed from the single - detector scenario , typically % , to < 1% for multi - detector setups . dc achieves an impressive .7% for a single - detector setup by using the bugey4 data for the normalisation of the mean cross - section per fission , i.e. as an effective _ normalisation - only near _ . the most accurate reactor anti - neutrino flux spectrum predictions rely on the ill uranium and plutonium isotope input data . however , the final flux systematics are a combination of the reactor and spectral systematic errors and depend on each experiment 's configuration , since the evolution of the fission elements depends on the running configuration of each reactor . therefore , flux systematics are expected , with current knowledge , to be the dominant contribution for some of the reactor experiments , such as db , itself leading the world precision . this publication develops a framework for the non - trivial propagation of the reactor flux uncertainties and their suppression in the context of multi - detector and multi - reactor experimental setups , providing mechanisms for the improvement of the global precision . most of our discussion stays generic regarding flux systematic propagation , i.e.
no need for the specific experiment error breakdown , allowing easy relative comparison across all experiments . our study cases are , however , inspired by the specific reactor -experiments ' configurations for maximal pertinence . the core of our calculations is analytical , but a cross - check , implemented using a dedicated monte - carlo - based analysis , is also presented . the discussion starts from the simplest single - detector configuration , then evolves towards a general formalism applicable to any multi - detector and multi - reactor setup . the numbers linked to the specific experimental setups are , however , only guidelines , as they are obtained in simplified scenarios , again , to maximise comparability . so far , the reactor -experiments appear to treat the reactor systematics similarly . however , all collaborations lack dedicated publications on the topic , so questions about the coherence across collaborations remain . below , we briefly summarise the reactor flux systematics information , as provided by the different collaborations . most experiments rely on pwr reactors , hence the anti - neutrino flux is dominated by four isotopes : , , and ( ordered by contribution relevance ) .
in general , flux systematic uncertainties are typically broken down into three terms : thermal power , fission fractions and spent fuel , as summarised in table [ tab - uncorpar ] . the thermal power history is usually provided by the electricity company exploiting the reactor . the associated error is related to the measurement method , the precision of the installed sensors and the calibration of the employed sensors . each experiment has to precisely study the way the thermal power is measured in order to estimate the corresponding error and the possible correlations among all the reactors involved , as , typically , the same measurement techniques are applied to all identical reactors in a power plant . the fractional fission rates have to be computed through reactor simulations , and their uncertainty estimation is not simple , as they depend on the reactor model and the approximations used for each setup . after each cycle , the reactors are stopped for refuelling , typically once per year in an operation lasting a few weeks . in this operation , part of the fuel assemblies are exchanged for new ones . the spent fuel is stored in dedicated pools , typically located next to the reactor sites with slightly different baselines . the long - lived fission products in the spent fuel can generate a small fraction of anti - neutrinos above the inverse beta decay threshold . therefore , each experiment also has to evaluate the contribution from spent fuel and its uncertainty . . reactor -experiments ' breakdown of the detector flux systematic error . the daya bay and reno errors quoted here correspond to their uncorrelated contributions , as they relied on multi - detector setups . instead , the double chooz errors also include the correlated contributions , as quoted from the single - detector analysis . therefore , the double chooz error is expected to be overestimated and will be revised by the collaboration in future publications .
thermal power ( ) , fission fractions ( ) and spent fuel are considered . [ cols="<,^,^,^,^ " , ] [ tab - sf ] since both and are given by eq . [ eq : deffigen ] , we can directly use eq . [ eq_i2d2nr_final ] in order to get the expression for the corresponding sf . inserting the values of the geometrical parameters , we show the variation of the sf with the relative weighting parameter in fig . [ fig - db_tune ] for fully uncorrelated errors of the reactor fluxes . the dotted curve represents the variation of the sf when all reactors are operational , and the continuous curve represents the mean of the values obtained when one reactor is off , in the refuelling scenario considered in the last section . the variations in such a refuelling scenario are indicated by the grey area in fig . [ fig - db_tune ] , accounting for the different reactors . there are two interesting cases to be highlighted . = 1 : : : this point represents no relative weighting applied on the near sites , for which the sf is .20 . the spread of the sf when considering the reactor refuelling scenario is % . at minimum : : : the effective sf obtained is about .05 . the spread due to refuelling is about % . our result is consistent with the official result of db , where they obtained = 0.04 and = 0.3 for the analysed data period ; considering that one of the near detectors contains twice the events of the other , we obtained to be 3.38 .
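a minimal , generic way to see how the site geometry drives the suppression of fully uncorrelated reactor errors is sketched below . it assumes equal fractional errors per reactor and pure 1/L^2 flux weights ; the collaborations ' actual treatments also include oscillation weights , efficiencies and the relative near - site weighting discussed above , so the numbers are only illustrative :

```python
import math

def flux_weights(powers, baselines):
    """Fractional flux contribution of each reactor at one detector,
    assuming a pure 1/L**2 dependence."""
    raw = [p / L ** 2 for p, L in zip(powers, baselines)]
    total = sum(raw)
    return [w / total for w in raw]

def sf_uncorrelated(powers, L_far, L_near=None):
    """Residual fractional error on the far rate (or far/near ratio when a
    near site is given) for equal, fully uncorrelated per-reactor errors;
    1 means no suppression, 0 means total suppression."""
    w_far = flux_weights(powers, L_far)
    w_near = flux_weights(powers, L_near) if L_near else [0.0] * len(powers)
    return math.sqrt(sum((f - n) ** 2 for f, n in zip(w_far, w_near)))

# single detector, one reactor: no suppression at all
print(sf_uncorrelated([1.0], L_far=[1.0]))                  # 1.0
# single detector, 6 identical reactors at equal distances: 1/sqrt(6)
print(sf_uncorrelated([1.0] * 6, L_far=[1.0] * 6))
# an iso-flux near-far pair (equal weights at both sites): total cancellation
print(sf_uncorrelated([1.0, 1.0], L_far=[1.05, 1.0],
                      L_near=[0.35, 1.0 / 3.0]))            # ~ 0
```

the three printed cases reproduce , in order , the single - detector limit , the sf( ) scaling with identical reactors , and the iso - flux cancellation .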
using mc , we also obtained , respectively , 0.04 and 0.3 . our calculation here thus serves as a replication . however , due to the difficulties in the physical interpretation of this , the corresponding sf is not considered any further in this publication . this is consistent with the fact that db discards it for the measurement of . we have identified , studied and quantified three mechanisms inducing reactor flux uncertainty suppression in the context of multi - detector experiments in multi - reactor sites . we have quantified the integral error suppression by the _ suppression fraction _ ( sf ) analytically ( cross - checked via mc ) , using simplified experimental scenarios to allow a coherent relative comparison across experiments . the sf can take values within [ 0,1 ] , where the extreme cases stand , respectively , for total suppression ( sf=0 ) and no suppression ( sf=1 ) . the three mechanisms can be characterised by their respective sf terms , since sf(total ) is defined as sf(total ) ( ) sf(iso - flux ) sf(correlation ) , where _ i _ ) sf( ) is linked to the scaling of the remaining uncorrelated reactor error , _ ii _ ) sf(iso - flux ) is linked to the site iso - flux condition and _ iii _ ) sf(correlation ) is linked to the nature of the reactor errors . both sf(iso - flux ) and sf(correlation ) could lead to total error suppression under specific site conditions . if those terms are to be exploited , those conditions must be carefully evaluated and demonstrated by each experiment , accounting accurately for all pertinent effects , although no experiment has ever done this . our final results , relying on simplified refuelling scenarios , are summarised in table [ tab - sf ] . these results are not expected to be used by the experiments as such , but as mere guidelines for more accurate estimations to follow . the contribution of sf(correlation ) was not quantified for any specific experimental setup , as it deserves the more careful treatment discussed below . the sf( ) has been actively exploited in
publications by experiments to improve our knowledge on , under the assumption that the remaining error is fully reactor uncorrelated . typically , refers to the number of effectively identical reactors per site . in the case of db and reno , this term amounts to % suppression ( 6 reactors ) , whereas it yields only % suppression for dc - ii . typically , sf( ) is propagated into the precision as a byproduct of the minimisation . the estimation of sf(iso - flux ) has not been applied by any experiment so far . it is likely impractical to implement via the minimisation formulation ; instead , calculations might follow the prescription presented here . once estimated , as demonstrated in this publication , the sf(iso - flux ) is expected to improve the results published so far by db and reno , providing an extra flux error reduction of up to % and % , respectively , on . due to the simplified conditions assumed for our calculations , our sf s are expected to be slightly optimistic relative to those eventually to be obtained by dedicated analyses by db and reno . in the case of dc - ii , the sf(iso - flux ) term is expected to yield a dramatic % error reduction , since the iso - flux condition is almost met . this makes dc - ii the only experiment likely to benefit from a negligible flux error , compared to other systematics . dc has officially prospected a conservative 0.1% as the flux error for dc - ii , well within the analysis presented here . therefore , the dc - ii final sensitivity is expected to be dominated by its challenging background systematic , thus in maximal complementarity to the db error budget . since the flux error is dominant for db , the error reduction by % presented here translates into a significant improvement of the world precision by means of db alone , but also via the envisaged combination of all reactor experiments . , is shown for dc - ii , reno and db . all experiments have assumed , so far , their errors to be reactor uncorrelated ; i.e.
. dc - ii benefits from the best sf ( .1 ) due to its almost iso - flux site . reno and db benefit mainly from the large - number - of - reactors error suppression , but they also have some partial iso - flux matching , hence benefiting from up to an extra error suppression , so far neglected in publications . both full reactor power ( dashed lines ) and a simplified refuelling scenario ( solid lines ) are shown , the latter being expected to be a more accurate description of reality . in the refuelling scenario , dc - ii benefits from a null reactor flux error whenever only one reactor is running , while reno and db show the expected opposite trend . ] the sf(correlation ) term has the potential to provide further flux systematic error suppression , regardless of the reactor site geometry : in the ( unexpected ) limit of full correlation across reactors , sf , since the sf(correlation ) term cancels , as illustrated in fig . [ fig - all_1d ] . assessing the correlation among reactor errors is a difficult subject , with , today , no specific common prescription or consensus . thus , the only existing consensus is to adopt the most conservative scenario , implying two distinct cases : single - detector case : : : maximal sf(total ) is obtained if _ total correlation of reactor errors _ is assumed , as shown in fig . [ fig - dc_0near_2d ] . this is because both the sf(correlation ) and sf( ) terms provide no error suppression ; i.e. sf=1 each . this is the scenario assumed for all dc - i publications and all single - detector experiments , unless otherwise proved , thus affecting all past and most future experiments . multi - detector case : : : maximal sf(total ) is obtained if _ total uncorrelation of reactor errors _ is assumed , as shown in fig . [ fig - dc_1near_2d]-right . this implies sf(correlation)=1 and sf()= , hence benefiting from the maximal sf( ) reduction . this is the scenario expected to be assumed by db , dc - ii and reno , until otherwise proved .
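the role of the inter - reactor correlation can be made explicit with a small toy model ( our assumption , not any collaboration 's chi^2 implementation ) : write the residual fractional error as sqrt( d^T C d ) , where d is the difference of the far and near fractional flux weights and C is a correlation matrix with a common off - diagonal coefficient rho . the two extreme cases above then come out automatically : rho = 0 leaves the uncorrelated residual , while rho = 1 cancels it entirely for any near - far pair , because each set of weights sums to one and hence d sums to zero .

```python
import math

def sf_correlated(w_far, w_near, rho):
    """Residual fractional error sqrt(d^T C d) for equal per-reactor errors,
    with a common inter-reactor correlation coefficient rho in [0, 1]."""
    d = [f - n for f, n in zip(w_far, w_near)]
    var = sum(di * dj * (1.0 if i == j else rho)
              for i, di in enumerate(d)
              for j, dj in enumerate(d))
    return math.sqrt(max(var, 0.0))

# two reactors, a non-iso-flux near-far pair (illustrative weights)
w_far, w_near = [0.5, 0.5], [0.8, 0.2]
for rho in (0.0, 0.5, 1.0):
    print(rho, sf_correlated(w_far, w_near, rho))   # rho = 1 gives ~ 0
```

for a single detector ( w_near identically zero ) the same formula gives sf = 1 at rho = 1 , matching the conservative single - detector case above , since then d sums to one instead of zero .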
of course , reactor errors are unlikely to be either totally correlated or totally uncorrelated . the fact that only those extreme cases are considered in the literature is a mere demonstration of the lack of knowledge for a better handling . beyond the current debate among experts on the subject , the promising exploitation of sf(correlation ) will require strong reactor - type dependences to be accounted for and justified carefully by each experiment in dedicated publications . this means that each experiment will have to analyse their respective reactors , thus providing evidence of the error correlation behaviour . the thermal power contribution , typically indicated by , depends mainly on the uncertainty analysis of the internal reactor instrumentation data used for the power estimation , as provided by the company running the reactor . instead , the fission fraction evolution , indicated by , depends mainly on the uncertainty analysis of the simulations , including the assumptions and input parameters ( fuel configuration , etc . ) used for the modelling and time evolution . both the instrumentation and the simulations are very specific to each reactor type and , therefore , to each experiment . therefore , dedicated analyses are needed by each experiment to justify the delicate exploitation of sf(correlation ) , in the same way that this publication aims to illustrate the realisation of the unprecedented sf(iso - flux ) exploitation .
as a consequence of the unsettled complications behind the assessment of sf(correlation ) for each experiment , our treatment here remains generic , as shown in fig . [ fig - all_1d ] ; nonetheless , we illustrate and quantify the rationale behind the error suppression and its promising exploitation potential . our approach is also consistent with the fact that none of the experiments have so far provided detailed publications on the non - trivial reactor systematics analyses , including the quantitative justification of all assumptions used so far . in the context of the experiments , a few references exist for dc and db ( nothing yet available for reno ) illustrating a fraction of their reactor flux studies , but not dealing with the inter - reactor error correlation needed for the exploitation of sf(correlation ) . dc is , however , finalising a dedicated publication on the reactor flux systematics quoted so far . thus , the critical subject of the reactor flux error , despite its major impact on the final precision on , remains unfortunately somewhat obscure in today 's literature . as time goes on and the statistical error of reactor experiments is no longer dominant , the treatment of the systematics should be clearly laid out well in advance to maximise the stability and reliability of the measurement , whose impact has critical implications transcending reactor neutrino results , affecting , for example , current searches for neutrino cp - violation . this publication provides reactor neutrino experiments with a coherent treatment of the reactor flux systematic uncertainty for multi - detector experiments in multi - reactor sites . we started with a careful treatment of the single - detector , single - reactor - site scenario , being the most relevant case for most reactor experiments beyond . we have demonstrated that the challenging reactor flux systematic does not trivially cancel upon the adoption of multi - detector configurations . however , we have identified
several means for error suppression in the context of the double chooz , daya bay and reno experiments , using simplified scenarios to maximise relative comparison . we computed an integral error _ suppression fraction _ ( sf ) , which can be broken down into three components , defined as sf(total)(iso - flux)()(correlation ) , where sf( ) suppresses the uncorrelated error of identical reactors , sf(iso - flux ) suppresses the total error if the site geometry meets fully , or partially , the iso - flux condition and sf(correlation ) suppresses the error if the reactor errors are fully correlated . sf(iso - flux ) and sf(correlation ) could lead to total cancellation of the flux error . however , only sf( ) has been exploited to improve the precision , although total cancellation is impossible . this publication deals in detail with the calculation of sf(iso - flux ) , thus paving the way for its exploitation , yielding two important observations . first , dc , once in its near+far configuration , is the only experiment expected to benefit from a negligible reactor flux error , thanks to the % iso - flux error suppression . second , daya bay and reno could also benefit from their partial iso - flux , thus yielding up to % flux error suppression . thus , this publication embodies a major improvement in the global precision of by improving the precision of all experiments measuring it , including current results . finally , we have highlighted the potential for a mechanism , currently neglected , for error suppression relying on further reactor error correlation insight , characterised by the sf(correlation ) term , further improving the precision of all multi - detector experiments . we would like to thank h. de kerret ( in2p3/cnrs - apc , france ) as well as c. buck ( mpik , germany ) , l. camilleri ( columbia university , usa ) , r. carr ( columbia university , usa ) , l. giot ( subatech , france ) and m.
ishitsuka ( tokyo institute of technology , japan ) for suggestions and comments on the manuscript . this work was supported by the ief marie curie programme ( p. novella ) .

y. abe _ et al . _ [ double chooz collaboration ] , improved measurements of the neutrino mixing angle with the double chooz detector , _ jhep _ * 10 * ( 2014 ) 086 .
f. p. an _ et al . _ [ daya bay collaboration ] , improved measurement of electron antineutrino disappearance at daya bay , _ chin . phys . c _ * 37 * ( 2013 ) 011001 .
f. p. an _ et al . _ [ daya bay collaboration ] , spectral measurement of electron antineutrino oscillation amplitude and frequency at daya bay , _ phys . rev . lett . _ * 112 * ( 2014 ) 061801 .
j. k. ahn _ et al . _ [ reno collaboration ] , observation of reactor electron antineutrino disappearance in the reno experiment , _ phys . rev . lett . _ * 108 * ( 2012 ) 191802 .
p. adamson _ et al . _ [ minos collaboration ] , measurement of neutrino and antineutrino oscillations using beam and atmospheric data in minos , _ phys . rev . lett . _ * 110 * ( 2013 ) 251801 .
k. abe _ et al . _ [ t2k collaboration ] , observation of electron neutrino appearance in a muon neutrino beam , _ phys . rev . lett . _ * 112 * ( 2014 ) 061802 .
m. c. gonzalez - garcia , m. maltoni , th . schwetz [ global fit analysis ] , updated fit to three neutrino mixing : status of leptonic cp violation , _ jhep _ * 1411 * ( 2014 ) 052 .
d. v. forero , m. tortola , j. w. f. valle [ global fit analysis ] , neutrino oscillations refitted , _ phys . rev . d _ * 90 * ( 2014 ) 093006 .
f. capozzi , g. l. fogli , e. lisi , a. marrone , d. montanino , a. palazzo [ global fit analysis ] , status of three - neutrino oscillation parameters , circa 2013 , _ phys . rev . d _ * 89 * ( 2014 ) 093018 .
y .- f . li _ et al . _ , unambiguous determination of the neutrino mass hierarchy using reactor neutrinos , _ phys . rev . d _ * 88 * ( 2013 ) 013008 .
th . mueller _ et al . _ , improved predictions of reactor antineutrino spectra , _ phys . rev . c _ * 83 * ( 2011 ) 054615 .
p. huber , determination of antineutrino spectra from nuclear reactors , _ phys . rev . c _ * 84 * ( 2011 ) 024617 .
k. schreckenbach _ et al . _ , determination of the antineutrino spectrum from thermal neutron fission products up to 9.5 mev , _ phys . lett . _ * 160b * ( 1985 ) 325 .
a. a. hahn _ et al . _ , antineutrino spectra from and thermal neutron fission products , _ phys . lett . _ * 218b * ( 1989 ) 365 .
z. djurcic _ et al . _ , uncertainties in the anti - neutrino production at nuclear reactors , _ j. phys . g : nucl . part . phys . _ * 36 * ( 2009 ) 045002 .
a. onillon , prediction of the reactor fission rates and the estimation of the associated uncertainties in the frame of the double chooz experiment , phd thesis , university of mines nantes ( 2014 ) [ https://tel.archives-ouvertes.fr/tel-01082405 ] .
an feng - peng _ et al . _ , systematic impact of spent nuclear fuel on sensitivity at reactor neutrino experiment , _ chinese phys . c _ * 33 * ( 2009 ) 711 .
m. drosg , dealing with uncertainties - a guide to error analysis , second enlarged edition , _ isbn 978 - 3 - 642 - 01383 - 6 , springer - verlag berlin heidelberg _ ( 2009 ) , 152 - 154 .
jones _ et al . _ , reactor simulation for antineutrino experiments using dragon and mure , _ phys . rev . d _ * 86 * ( 2012 ) 012001 .
ma _ et al . _ , uncertainties analysis of fission fraction for reactor antineutrino experiments using dragon , arxiv:1405.6807 .
y. abe _ et al . _ [ double chooz collaboration ] , reactor flux systematic error for the double chooz experiment , _ in preparation _ .

in order to deliver the correlation coefficient for partially correlated uncertainties , we generalise the approach presented in .
keeping the same notations , let us consider a measurement which depends on two variables and , having the absolute uncertainties and . we split one of them , for example , into its two components and , representing respectively the totally uncorrelated and the totally correlated parts , both relative to . since the general error propagation formula is symmetric and the correlation is mutual , one can split either or . the fraction between the correlated and uncorrelated components is characterised by a constant such that . since , the uncorrelated and correlated components are expressed as | this publication provides a coherent treatment for the suppression of the reactor neutrino flux uncertainties , especially focussed on the latest measurement . the treatment starts with a single detector in a single reactor site , most relevant for all reactor experiments beyond . we demonstrate there is no trivial error cancellation , thus the flux systematic error can remain dominant even after the adoption of multi - detector configurations . however , three mechanisms for flux error suppression have been identified and calculated in the context of the double chooz , daya bay and reno sites . our analysis computes the error _ suppression fraction _ using simplified scenarios to maximise relative comparison among experiments . we have validated the only mechanism exploited so far by experiments to improve the precision of the published . the other two newly identified mechanisms could lead to total flux error cancellation under specific conditions and are expected to have major implications on the global knowledge today . first , double chooz , in its final configuration , is the only experiment benefiting from a negligible reactor flux error due to a % geometrical suppression . second , daya bay and reno could benefit from their partial geometrical cancellation , yielding a potential % error suppression , thus significantly improving the global precision today .
and third , we illustrate the rationale behind further error suppression upon the exploitation of the inter - reactor error correlations , so far neglected . so , our publication is a key step forward in the context of high precision neutrino reactor experiments providing insight on the suppression of their intrinsic flux error uncertainty , thus affecting past and current experimental results , as well as the design of future experiments . |
a standard gaussian random variable is very unlikely to be large : ( cf . * lemma a.3 ) ) , but it is relatively likely to be small : by direct computation , in fact , for every finite collection of iid standard gaussian random variables , analysis of the density of the distribution shows that where is the gamma function . similarly , for _ finitely _ many gaussian random variables , the asymptotic of is always algebraic in , as . on the other hand , for a standard -dimensional brownian motion , , that is , cf . * theorem 6.3 and corollary 3.1 ) or corollary [ cor : nbm ] below . transition from a finite to an infinite number of gaussian random variables typically leads to a qualitative change of behavior of small ball ( or lower tail ) probabilities : if then , as , the decay of is usually faster than polynomial in . the logarithmic asymptotic is rather robust : if is the solution of the linear equation with a positive - definite matrix , then regardless of the matrix ; cf . * theorem 4.5 ) . in the one - dimensional case , continues to hold even with some time - dependent drifts . a possible infinite - dimensional generalization of is the stochastic wave equation where is a two - parameter brownian sheet and is the corresponding space - time gaussian white noise . indeed , a change of variables reduces to with a different brownian sheet ; cf . * theorem 3.1 ) . equation can thus be considered an extension of to two independent variables in the spirit of ( * ? ?
section 7.4.2; according to , the small ball probabilities for and have similar asymptotics. so far, the paper appears to be the only work addressing the question of small ball probabilities for stochastic partial differential equations. the objective of the current paper is to investigate the asymptotic behavior of , where is the solution of the stochastic parabolic equation , is a positive self-adjoint elliptic operator on a bounded domain , is space-time gaussian white noise, and is the scale of sobolev spaces generated by . an expansion of the solution of in eigenfunctions of leads to an infinite system of ordinary differential equations, making an infinite-dimensional version of . the results can be summarized as follows: for both and , as , , where , , and are suitable numbers. for example, and . in particular, if , then the result is very similar to the finite-dimensional case. the details are below in theorem [ prop : cbm - sb ] ( ) and theorem [ th : main ]. throughout the paper, for , , the notation means , means , and means . the variable can be discrete or continuous and the limiting value finite or infinite. we also fix , a stochastic basis satisfying the usual assumptions. let be independent identically distributed standard gaussian random variables and let be positive real numbers such that . by direct computation, , and then tauberian theorems make it possible to connect the asymptotic of the right-hand side of as with the asymptotic of as . the most general result in this direction was obtained in : the function is defined implicitly by the relation , and this implicit dependence on is the main drawback of in concrete applications. less precise but more explicit bounds are possible using exponential tauberian theorems, such as theorems [ thm : tauberian ] and [ thm : tauberian1 ] below; they are modifications of theorem 3.5 ( which, in turn, is a modification of
theorem 4.12.9 ). [ thm : tauberian ] let be a non-negative random variable. then holds if and only if . while is only a _ logarithmic _ asymptotic of the probability and is not as strong as , it is usually more explicit than and is good enough in many applications. when holds, we say that the random variable has the small ball rate and the small ball constant . occasionally, a more refined version of theorem [ thm : tauberian ] is necessary. [ thm : tauberian1 ] let be a non-negative random variable. then holds if and only if . let be the solution of the equation with and , and assume that the initial condition is independent of the brownian motion and is a gaussian random variable with mean and variance . then , where . this follows by direct computation using theorem 3; see also lemma 17.3 when . for the standard brownian motion, with , and , equality becomes the well-known cameron-martin formula: . as an illustration of theorem [ thm : tauberian ], let us confirm . [ cor : nbm ] if , is an -dimensional standard brownian motion, then . by and independence of the components of , , and then follows from theorem [ thm : tauberian ] with and . let be a positive-definite self-adjoint elliptic operator of order on a bounded domain with sufficiently smooth boundary; alternatively, can be a smooth closed -dimensional manifold with a smooth measure. denote by the eigenvalues of , and by the corresponding normalized eigenfunctions. our main assumption is that the weyl-type asymptotic holds for : , with a constant depending only on the region; see theorem 1.2.1. for example, if on with zero boundary conditions, and is the lebesgue measure of , then and . for [ that is, is a smooth compactly supported real-valued function on ] and , define . then define the space as the closure of with respect to the norm .
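the brownian small ball statement in corollary [ cor : nbm ] can be sanity-checked numerically in one dimension. the classical eigenfunction series for the sup-norm small ball probability of brownian motion on [0, 1] is standard; the helper below, and the particular ε values, are our own illustration that −ε² log P(sup |W_t| ≤ ε) approaches π²/8 as ε shrinks:

```python
import math

def small_ball_sup_bm(eps, terms=100):
    """P( sup_{0<=t<=1} |W_t| <= eps ) for standard brownian motion,
    via the classical series
    (4/pi) * sum_k (-1)^k / (2k+1) * exp(-(2k+1)^2 pi^2 / (8 eps^2))."""
    s = 0.0
    for k in range(terms):
        s += ((-1) ** k) / (2 * k + 1) * math.exp(
            -((2 * k + 1) ** 2) * math.pi ** 2 / (8.0 * eps ** 2))
    return 4.0 / math.pi * s

# logarithmic small-ball asymptotic: -eps^2 * log P -> pi^2 / 8 as eps -> 0
rate_03 = -0.3 ** 2 * math.log(small_ball_sup_bm(0.3))
rate_02 = -0.2 ** 2 * math.log(small_ball_sup_bm(0.2))
```

already at ε = 0.2 the normalized logarithm agrees with π²/8 ≈ 1.2337 to within about one percent, and the agreement improves monotonically as ε decreases.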
in particular, we also define and identify with a possibly divergent series ; note that for all . a cylindrical brownian motion on is a gaussian process, indexed by , with the following properties: 1. there exists a such that ; 2. for every , the process , , is -adapted; 3. for every , the equality holds in . [ prop : he ] equation has a unique solution. moreover, for every , . the result can be derived from general existence and uniqueness theorems for stochastic evolution equations, such as theorem 5.4 or theorem 3.1.1; below is an outline of a direct proof. taking in , we find , that is, , and then , by . if , then , and follows. similarly, , and then kolmogorov's criterion implies that has a modification in . to establish uniqueness, note that the difference of two solutions satisfies the deterministic equation with zero initial condition. [ th : main ] as , , where . define by and , where . the goal is to show that, as , and , where . after that, relations will follow from theorems [ thm : tauberian ] and [ thm : tauberian1 ]. we start by establishing . we then show that and are of lower order compared to : note that , and recall that holds. if , then , since , for every , the dominated convergence theorem implies . therefore, for , . when , define the function by , whereas implies , with , uniformly in . define . we will now show that . to begin, let us establish the asymptotic of . with the notations , if , then implies . by l'hôpital's rule, for every , . therefore, as , . if , then implies ; in particular, . to establish , write , where . then will follow from . we have because, for fixed , , , and the function has at most one critical point. if , then , and follows from . if [ which is possible when ] , then [ by balancing and ] , so that, with , . to get a bound on , note that , where is a suitable constant independent of . together with , inequality implies , and the constant does not depend on .
by integral comparison, , and , similar to the derivation of , , which implies . the asymptotic of is now proved; a more compact form of is . it remains to establish . recall that ; cf. , , and . by definition, , which means , because the function is decreasing for . [figure: small ball rate for the solution (bold curve) and the noise.] if , is a sequence of real numbers such that for some and all , then, by , defines an equivalent norm on . when , we get immediate analogues of theorems [ prop : cbm - sb ] and [ th : main ]. let denote either or . for , direct computations show that either by [ when ] or by [ when ] , and then follows from theorem [ thm : tauberian ]. when , analogues of theorems [ prop : cbm - sb ] and [ th : main ] exist under an additional assumption about the numbers . [ prop : eqn ] assume that . then, as , , where for the solution of equation , , where . by and , to derive , it remains to replace in with . after a slightly more detailed analysis, follows from in a similar way. note that if , then becomes . in the special case , an alternative proof of proposition [ prop : eqn ] is possible using the results from example 2; for technical reasons, such a proof is usually not possible under the general assumption . without ( that is, assuming only ) , a precise logarithmic asymptotic of the small ball probabilities may not exist when , but the corresponding upper and lower bounds can still be derived. a generalization of is -cylindrical brownian motion defined by , where is a non-negative symmetric operator on .
if then , similar to , we set and consider the equation if furthermore then the question about the asymptotic of , is reduced to the corresponding question for the solution of the equation where .for example , if , that is , , , then assume that the initial condition in is independent of and has the form where are independent gaussian random variables with mean and variance .to ensure , condition must hold for all .in the finite - dimensional case , it is known ( * ? ? ?* theorem 4.5 ) that the initial condition may affect the small ball constant but not the small ball rate : if is a gaussian random vector independent of , then where may depend on the mean and covariance of . in particular , if the covariance matrix of is non - singular , then , that is , the initial condition does not change the small ball asymptotic at the logarithmic level .the corresponding results in the infinite - dimensional case are as follows .[ prop : inf - dim - ic ] assume that holds . 1 .if then holds .2 . if and for all , then the idea is to trace the contributions of the initial condition throughout the proof of theorem [ th : main ] . in particular , our objective is the asymptotic of as .by , the initial condition contributes an extra multiplicative term where also recall that , with zero initial condition , the dominant term is under , similarly , in other words , if the variance of the initial condition is strictly non - degenerate , then the initial condition does not affect the asymptotic of as .if and , then implies and , similar to the proof of , after that , theorem [ thm : tauberian ] implies .the stationary case requires special consideration ; cf . for one - dimensional ou process .assume that and . 
then holds. even though direct application of proposition [ prop : inf - dim - ic ] (1) is not possible, because now and fails, very little changes in the actual proof: with only the term present, we see that still holds, that is, the initial condition does not affect the asymptotic of . if , then the initial condition can affect the small ball rate. for example, assume that the initial condition in is non-random [ ] and , so that holds. using , , and , similar to the derivation of , , that is, , with from . by theorem [ thm : tauberian1 ] with , , , , which is very different from : the rate has an additional logarithmic term and the constant does not depend on . consider a linear operator on a separable hilbert space . if is symmetric and has a pure point spectrum, and the corresponding eigenfunctions form an orthonormal basis in , then all the constructions from section [ sec : dspe ] can be repeated, and an analog of theorem [ th : main ] can be stated and proved for the evolution equation , where is a cylindrical brownian motion on . the details depend on the asymptotic behavior of as . for example, consider the equation . then and ; cf. section 1.4. since the operator has order 2, we define . recall that the norm in the traditional sobolev space on is $|\!|\!|f|\!|\!|_{\gamma}^2=\int_{-\infty}^{+\infty}\big(1+\lambda^{2}\big)^{\gamma}\,\big|\hat{f}(\lambda)\big|^{2}\,d\lambda$, where $\hat{f}$ denotes the fourier transform of $f$. in particular, it follows that $|\!|\!|f|\!|\!|_{\gamma}^{2}=\infty$ for every and every [ roughly speaking, because and each is an eigenfunction of the fourier transform ] , and consequently the solution of , , does not belong to any traditional sobolev space on .
on the other hand, similar to propositions [ prop : wn ] and [ prop : he ], is the solution of . the corresponding small ball asymptotics can also be derived. the following relations hold as : , where , and , where . the case of follows from the asymptotic of , similar to the proof of theorem [ prop : cbm - sb ]. the case of follows the same steps as the proof of theorem [ th : main ]. consider the equation with , and assume that the positive-definite operators and commute, have purely point spectrum, and act in a scale of hilbert spaces , and , where is the embedding operator. if , then the logarithmic asymptotic of the small ball probabilities is similar to the finite-dimensional case: for both and . infinite-dimensional effects appear when : the small ball rate now depends on and can be arbitrarily large, whereas the small ball constant depends on the operator .

W. V. Li and Q.-M. Shao. Gaussian processes: inequalities, small ball probabilities and applications. In D. N. Shanbhag and C. R. Rao, editors, _Stochastic Processes: Theory and Methods_, volume 19 of _Handbook of Statistics_, pages 533–597. North-Holland, Amsterdam, 2001.

J. B. Walsh. An introduction to stochastic partial differential equations. In P. L. Hennequin, editor, _École d'Été de Probabilités de Saint-Flour XIV_, _Lecture Notes in Mathematics_, volume 1180, pages 265–439. Springer, 1984.
it is well - known that a wide variety of social networks have the so - called _ small world _property ; networks of size have diameter , meaning that between any two nodes there exists a path of size .this property is also shared by sparse random graphs ; however , random graphs lack several other important properties of real - world networks , such as clustering and heavy - tailed degree distributions ( see for a review ) .another property of social networks which distinguishes them from random graphs , and which has received somewhat less attention , is their _navigability_. that is , not only do short paths exist , but it is easy to find them using only local information .the pioneering work of milgram in the 1960s showed that people can , at least some of the time , find short paths to other distant people by pursuing the greedy strategy where they pass the message to whichever of their own acquaintances they feel is `` closest '' to the target recipient .this experiment has recently been repeated using e - mail networks . in a random graph , although a short path exists , a local algorithm must be lucky to find it as it can do little better than a random walk on the network .kleinberg addressed the issue of navigability by considering a small - world model consisting of a -dimensional lattice with long - range links added to it . unlike the watts - strogatz model in which the long - range links are uniformly distributed , in the kleinberg model pairs of nodes a distance apart are connected with some probability , where and the number of outgoing links per node is fixed . 
while a finite-dimensional lattice is obviously a gross oversimplification of the social spaces in which we live, or the conceptual spaces in which one web page seems `` closer '' to another, this model captures the essential features of the navigation problem: how can we minimize the routing time given local information about some metric? kleinberg studied the performance of the greedy algorithm, in which each node passes the message to whichever of its neighbors, either local or long-range, is closest to the destination on the underlying lattice. he showed that if , this algorithm achieves a routing time of on lattices of size . however, if , the routing time grows as for some . note that if , when we integrate over spheres of radius , we find that the distribution of link lengths is , with a cutoff when reaches the system size; this distribution provides the right mix of long-, medium-, and short-range links for the greedy algorithm to quickly `` zero in '' on its destination. similar results have been obtained for the case where the underlying graph is a tree, representing a hierarchical structure, or an overlapping set of trees representing multiple affiliations in society. while kleinberg's work largely characterizes the properties a network needs to have to be navigable, it leaves open the question of how or why networks might evolve these properties. indeed, the topology of a social network is constantly being modified by its members. if, whenever these members find it difficult to search the network, they modify their own connections in an effort to make future searches easier, this dynamical process should make the network more navigable over time. in this paper, we consider a dynamic network model inspired by surfers on the web, each of whom controls the outgoing links from their home page. we start at a source node , and choose a random destination node .
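the greedy algorithm on the one-dimensional ring variant of kleinberg's model can be sketched directly. the sketch below builds one long-range link per node with a contact at distance d chosen with probability proportional to d^(-r), then routes greedily; the helper names, lattice size, and trial count are illustrative choices, not taken from the experiments in the text.

```python
import random

def build_links(n, r, rng):
    """One long-range link per node on a ring of n nodes; a contact at
    lattice distance d is chosen with probability proportional to d**(-r)."""
    dists = range(1, n // 2 + 1)
    weights = [d ** (-r) for d in dists]
    total = sum(weights)
    offsets = []
    for _ in range(n):
        x, d = rng.random() * total, 1
        for d, w in zip(dists, weights):
            x -= w
            if x <= 0:
                break
        offsets.append(d * rng.choice((-1, 1)))
    return [(u + off) % n for u, off in enumerate(offsets)]

def greedy(n, links, s, t):
    """Pass the message to whichever neighbour (local or long-range)
    is closest to the target on the ring; return the routing time."""
    dist = lambda a, b: min((a - b) % n, (b - a) % n)
    steps, cur = 0, s
    while cur != t:
        cur = min([(cur - 1) % n, (cur + 1) % n, links[cur]],
                  key=lambda v: dist(v, t))
        steps += 1
    return steps

rng = random.Random(1)
n = 4096
nav = build_links(n, 1.0, rng)      # r = d = 1: the navigable exponent
times = [greedy(n, nav, rng.randrange(n), rng.randrange(n))
         for _ in range(200)]
```

since a local step always reduces the ring distance by at least one, every route terminates within n/2 steps; with r = 1 the mean routing time over random pairs is far smaller, consistent with the polylogarithmic behavior described above.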
based on the ( metric ) distance between and , we set a threshold on the number of ( topological ) steps we feel the journey should take .if after steps we have not reached our destination , we stop searching , and rewire s long - range link to the place where we gave up . in the human - friendship network, this might correspond to gaining new acquaintances in the course of a search ; on the web , it corresponds to creating a bookmark / linking to the relevant pages , which other surfers can then use .our main result is that this process does indeed cause the distribution of link lengths to converge to a power law .the precise value of varies with lattice size , and differs somewhat from kleinberg s prediction .we believe this is due to finite - size effects ; to support this belief , we directly construct networks with power - law link length distributions ( as opposed to rewiring them ) , measure the exponent that minimizes the routing time , and find that it converges rather slowly to as , roughly as .however , even though the exponents differ somewhat , our rewiring process produces networks whose routing times match or improve those of kleinberg s optimum .other types of dynamical models have been studied ( e.g. 
) .however , the model discussed here appears to be the first to optimize for navigability .one restriction of kleinberg s model of a social space consists of a -dimensional lattice of size , where each node is connected to its nearest neighbors and has a single long - range link to a node , chosen with where is the manhattan distance ( the norm ) .such networks are denoted in ; here the two indicate the radius of the local and the number of long - range links per node .our goal is to show that our rewiring process causes networks with a range of initial link length distributions to converge to this form , where and is close to kleinberg s prediction .we report here on where , for which we can feasibly study networks of size up to ; our results for are qualitatively similar . in each round of the rewiring process , we choose the source node uniformly , and choose the destination according to a _ demand distribution _ .one method of selection is to choose a distance according to a distribution , and then choose randomly from among the nodes that satisfy . here, we take to be uniform , which for means that is simply a uniformly random node ; we comment in the conclusions on the effect of other demand distributions .starting at , the greedy algorithm produces a path where is the routing time , and where for each , is the outgoing neighbor of that minimizes ( with ties broken randomly ) .the rewiring process works as follows : if , i.e. , if the greedy algorithm reaches some with , then we discontinue the search and change s outgoing link to point to . in our experiments ,we choose the threshold uniformly at random from the interval $ ] .such a naive selection avoids making assumptions about the network s size , performance or topology .in particular , it avoids assuming the routing time which both the rewired networks and kleinberg s networks achieve . 
for our initial conditions, the link length distributions have exponent ranging from , a uniform distribution as in the watts-strogatz model, to , where all the `` long-range links '' are simply self-loops. we run rounds per node of the rewiring process; that is, times we choose nodes , run the greedy algorithm from to , and rewire if . we then compare the final link length distribution to kleinberg's optimum. [figure: link length distribution after running the rewiring process until ; the initial graphs had , i.e., only self-loops.] [figure: for a range of initial link length exponents , where ; as increases, converges to regardless of the value of .] figure [ fig : logbinned ] illustrates the resulting distributions for networks of size and . we see that after rounds ( we describe our choice of in the next section ) the rewiring process has built a power-law distribution of link lengths, with respectively. figure [ fig : variousa ] shows that a range of initial distributions converge to the same final distribution. starting with initial exponents ranging from to , the exponents of the rewired link length distributions all converge to ( here ) as the rewiring continues. while our rewiring process produces a power-law link length distribution, the exponent is noticeably different from the value ( more generally, ) which kleinberg proved optimal in the limit . notably, figure [ fig : variousa ] shows that networks with an initial exponent actually move away from this value as they are rewired. this deflection appears to be due to finite-size effects, and in fact the exponent that minimizes the routing time on finite lattices turns out to be rather different from kleinberg's value even at reasonably large . we examined system sizes over six orders of magnitude, .
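one round of the rewiring process can be sketched as follows. the budget rule h drawn uniformly over [1, ring distance(s, t)] is an illustrative stand-in for the threshold interval used in the text (whose exact bounds are not reproduced here), and the ring size and round count are arbitrary choices for the sketch.

```python
import random

def rewire_round(n, links, rng):
    """One round of the decentralized rewiring process: pick a random
    source s and target t, set a step budget h, run greedy routing, and
    if the budget runs out before reaching t, repoint s's long-range
    link to the node where the search gave up."""
    dist = lambda a, b: min((a - b) % n, (b - a) % n)
    s, t = rng.randrange(n), rng.randrange(n)
    if s == t:
        return True
    h = rng.uniform(1.0, max(2.0, dist(s, t)))   # naive, topology-blind threshold
    cur, steps = s, 0
    while cur != t and steps < h:
        cur = min([(cur - 1) % n, (cur + 1) % n, links[cur]],
                  key=lambda v: dist(v, t))
        steps += 1
    if cur != t:
        links[s] = cur      # bookmark the place where we gave up
        return False
    return True

rng = random.Random(7)
n = 512
links = list(range(n))      # the a -> infinity initial condition: self-loops only
before = list(links)
for _ in range(20 * n):
    rewire_round(n, links, rng)
```

because the greedy distance to the target strictly decreases at each step, a rewired link always points somewhere other than the source itself, so repeated rounds steadily replace the initial self-loops with genuinely long-range shortcuts.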
for each size , we constructed networks with ranging from to , and measured the mean routing time over trials for each . we estimated as the minimum of the best quadratic fit of . figure [ fig : static - egfit ] illustrates for several values of and clearly shows that . the dashed line illustrates the slow approach of to as increases. figure [ fig : static - rminasymptote ] shows the dependence of on ; it is fit rather closely by the form for . the fact that converges polylogarithmically, as opposed to polynomially, indicates that finite-size effects are quite severe. [figure: mean routing time for several network sizes; the dashed line shows the path , defined as the minimum of the best quadratic fit, follows as increases.] [figure: finite-size dependence of for up to ; the solid line shows a fit to , showing that converges polylogarithmically slowly to as .] [figure: mean routing time for networks with ; shown is a fit to of the form .] however, while differs from ( and from ) , our rewired networks achieve a mean routing time equivalent to networks with exponent , and better than those with exponent . figure [ fig : static - meanrouting ] compares the mean routing times for the three types of networks. all three are closely fit with a curve of the form , just as kleinberg's analysis predicts. of course, the routing time of the network depends on the number of rewiring rounds. in figure [ fig : relaxtime ], we show the number of rounds per node required to achieve a mean routing time which is only greater than , i.e., the routing time of a network with exponent . this _ rewiring time _ grows as rounds per node, or rounds total. [figure: , i.e., the number of rounds per node the rewiring process needs to achieve a routing time within of that of a network with exponent ; the solid line is a fit of the form .] kleinberg explored the navigability of small-world networks built on an underlying -dimensional space.
in the limit of infinite size , he found the mean routing time in minimized by a power - law of link lengths with exponent . here , we have explained how might develop such a distribution over time , by a decentralized rewiring process that relies only on local information ( and is even ignorant of the size of the network ) .this process has a natural interpretation : the topology of a social network is constantly being modified by its members , who update their personal connections as they explore and navigate the network .if a becomes frustrated because the journey to a destination takes too long , they can be expected to change their to make similar journeys more quickly in the future .our results show that rewiring causes a wide range of initial topologies to converge to a power - law distribution of link lengths , very similar to kleinberg s .the exponent we obtain differs significantly from , its optimal value on infinite lattices .we attribute this deflection to finite size effects which cause the optimal exponent to converge polylogarithmically as .however , the rewired network achieves the same mean routing time as power - law networks with exponent , and better routing times than those with . specifically , the mean routing time as a function of system size is as predicted by kleinberg s analysis .the number of rounds of the rewiring process needed to achieve this routing time grows as a low - degree polynomial of .in addition to creating and maintaining a power - law distribution of link lengths , we believe this rewiring process to be _adaptive_. for instance , we conjecture that if new nodes are added to the network , or if certain nodes or links are removed , it will dynamically optimize for these new situations .we also believe , based on preliminary results , that if the demand distribution is not uniform , e.g. 
if certain destinations are more popular than others, or if the source and the destination are correlated ( both of which are true in any real network ) , it will optimize routing times for the source-destination pairs that appear more frequently. these adaptive properties would be particularly useful in networks where nodes are constantly coming on- and off-line and where the demand for each destination rises and falls over time, such as live peer-to-peer networks, distributed sensor networks, or massively parallel computers. our discussion contemplates a `` social space '' consisting of a finite-dimensional lattice, an obviously poor model for the complex social spaces we routinely navigate. an interesting study would be an analogous rewiring process for networks whose underlying structure is hierarchical, involves multiple group affiliations, or is otherwise structured, as in the peer-to-peer network freenet with the modifications described in . we leave these as directions for future work.

I. Clarke et al., `` Freenet: a distributed anonymous information storage and retrieval system, '' in _Designing Privacy Enhancing Technologies_, LNCS 2009, H. Federrath, ed., Springer-Verlag, Berlin ( 2001 ).
in this paper we define a decentralized `` rewiring '' process , inspired by surfers on the web , in which each surfer attempts to travel from their home page to a random destination , and updates the outgoing link from their home page if this journey takes too long . we show that this process does indeed cause the link length distribution to converge to a power law , achieving a routing time of on networks of size . we also study finite - size effects on the optimal exponent , and show that it converges polylogarithmically slowly as the lattice size goes to infinity . |
suppose that we observe equispaced samples ( at the nyquist sampling rate ) of a number of sinusoidal signals, denoted by the matrix , on the index set , where is the number of equispaced samples per sinusoidal signal, , and . that is, we have measurement vectors corresponding to the columns of . here indexes the entries of , , by identifying the beginning and ending points. boldface letters are reserved for vectors and matrices. for an integer , $[n]\triangleq\{1,\cdots,n\}$. define the event . on , is guaranteed to be invertible. then we introduce the partitions and , where , , and are of the same dimension. let and similarly define its deterministic analog , where denotes the derivative of and . let be a small positive number and a constant which may vary from instance to instance. assume . we have if . [ lem : invertibility ] assume . let . we empirically find that , which is the mid-point of the interval above. we first consider uncorrelated sources, where the source signals in ( [ formu : observmodel1 ] ) are drawn i.i.d. from a standard complex gaussian distribution. moreover, we consider the number of measurement vectors . for each value of and each type of frequencies, we carry out 20 monte carlo runs and calculate the success rate of frequency recovery. in each run, we generate a set of frequencies and , and obtain the complete data . for each value of , we attempt to recover the frequencies by the proposed atomic norm method, implemented with sdpt3 in matlab, based on the first columns of . given the frequency solution and _ mean amplitude _ ( see section [ sec : freqretriev ] ), we denote by the vector of the largest amplitudes ( the rest by ) and by the corresponding set of frequencies.
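the observation model — n equispaced samples of a few sinusoids, with multiple measurement vectors as columns — can be sketched as follows. the function name and interface are illustrative; constant-amplitude sources with random phases mirror the setup used in the noisy experiment later in the text, but the particular sizes and frequencies below are our own.

```python
import numpy as np

def sinusoid_mmv(n, freqs, amps, num_mv, rng):
    """n equispaced samples of len(freqs) sinusoids, with num_mv
    measurement vectors: Y = A @ S, where A[j, k] = exp(i 2 pi f_k j)
    and row k of S carries source k with constant amplitude amps[k]
    and uniformly random phases."""
    f = np.asarray(freqs, dtype=float)
    a = np.exp(2j * np.pi * np.outer(np.arange(n), f))
    phases = np.exp(2j * np.pi * rng.random((f.size, num_mv)))
    s = np.asarray(amps, dtype=float)[:, None] * phases
    return a @ s

rng = np.random.default_rng(0)
y = sinusoid_mmv(64, [0.10, 0.13, 0.30], [2.0, 3.0, 1.0], 5, rng)
```

with random phases the source matrix has full row rank almost surely, so the data matrix has rank equal to the number of sinusoids — the low-rank structure that the multiple-measurement-vector formulation exploits.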
we may consider the rest of the frequencies in to be spurious peaks with amplitudes . the recovery is considered successful if , the root mean squared error ( rmse ) of frequency , and the maximum absolute error of amplitude , where denotes the true vector of amplitudes. our simulation results are presented in figs. [ fig : complete_l_uniformfreq ] and [ fig : complete_l_nonuniformfreq ], which verify the conclusion of theorem [ thm : completedata ] that the frequencies can be exactly recovered by the proposed atomic norm method under a frequency separation condition. moreover, when we take more measurement vectors, the performance of recovery improves, and it seems that a weaker frequency separation condition is sufficient to guarantee exact frequency recovery in this case of uncorrelated sources. by comparing fig. [ fig : complete_l_uniformfreq ] and fig. [ fig : complete_l_nonuniformfreq ], we also observe that a stronger frequency separation condition is required in the case of equispaced frequencies, where more frequencies are present and located more closely. it is worth noting that with the equispaced frequencies the minimum separation at which the atomic norm method starts to succeed is close to the necessary separation condition in remark [ rem : necessaryseparation ] that . we next consider coherent sources. in this simulation, we fix and consider different percentages, denoted by , of the source signals which are coherent ( identical up to a complex scale factor ). that is, refers exactly to the case of uncorrelated sources considered previously, and means that all the source signals are coherent and the mmv case is equivalent to the smv case.
for each type of frequencies ,we consider five values of ranging from to and calculate each success rate over 20 monte carlo runs .our simulation results are presented in figs .[ fig : complete_cohsource_uniformfreq ] and [ fig : complete_cohsource_nonuniformfreq ] .it is shown that , as the percentage of coherent sources increases , the success rate decreases and a stronger frequency separation condition is required for exact frequency recovery .as equals the extreme value , the curves of success rate approximately match those at in figs .[ fig : complete_l_uniformfreq ] and [ fig : complete_l_nonuniformfreq ] , verifying that taking more measurement vectors does not necessarily improve the performance of frequency recovery . finally , we report the computational speed of the proposed atomic norm method .it takes about 11s to solve one sdp on average and the cpu times differ slightly at the three values of .about 22 hours are used in total to produce the data used in fig .[ fig : completedata ] . for incomplete data , we study the so - called phase transition phenomenon in the plane .in particular , we fix , and , and study the performance of our proposed atomic norm minimization method in signal and frequency recovery with different settings of the source signal .the frequency set is randomly generated with and ( differently from that in the last subsection , the process of adding frequencies is terminated as ) . in our simulation, we vary and at each , since it is difficult to generate a set of frequencies with under the aforementioned frequency separation condition . in this simulation , we consider temporarily correlated sources . in particular, suppose that each row of source signal has a toeplitz covariance matrix ( up to a positive scale factor ) .therefore , means that the source signals at different snapshots are uncorrelated while means completely correlated . 
in our simulation , we first generate from an i.i.d. standard complex gaussian distribution and then let , where we consider . for each combination , we carry out 20 monte carlo runs and calculate the rate of successful recovery with respect to each . the recovery is considered successful if the relative rmse of data recovery , the rmse of frequency recovery and the maximum absolute error of amplitude recovery , where and the last two metrics are defined as in the previous simulation , and denotes the solution of . our simulation results are presented in fig. [ fig : phasetransition ] , where a transition from perfect recovery to complete failure can be observed in each subfigure . more frequencies can be recovered when more samples are observed . moreover , when the correlations between the mmvs , indicated by , increase , the phase of successful recovery shrinks , and at the same time the phase transition becomes less clear . in the extreme case where , all the measurement vectors are completely correlated , which in fact is equivalent to the smv case . therefore , by comparing fig. [ fig : pt100 ] and the first three subfigures , we can conclude that the performance of frequency recovery improves in general when more measurement vectors are observed . it is also worth noting that corresponds to the complete data case considered in the previous simulation . we also plot the line in figs. [ fig : pt00]-[fig : pt90 ] and in fig. [ fig : pt100 ] ( straight gray lines ) , which are upper bounds of the sufficient condition in theorem [ thm : al0_guanrantee ] for the atomic norm minimization . we see that successful recoveries can be obtained even above the lines , indicating good performance of the proposed atomic norm minimization method . again , we report the computational speed . it takes about 13s on average to solve each problem and almost 200 hours in total to generate the whole data set used in fig. [ fig : phasetransition ] .
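a minimal sketch of generating temporally correlated snapshots of the kind used above : draw i.i.d. gaussians and color them by the cholesky factor of a toeplitz covariance . the exponential toeplitz choice r**|i-j| and the real - valued ( rather than complex ) gaussians are our illustrative assumptions ; the paper 's exact covariance and scaling are elided in the text above .

```python
import math, random

def toeplitz_cov(n, r):
    """Toeplitz covariance with entries r**|i-j| (illustrative choice).
    r = 0 -> identity (uncorrelated snapshots); r -> 1 -> fully correlated."""
    return [[r ** abs(i - j) for j in range(n)] for i in range(n)]

def cholesky(a):
    """Lower-triangular L with L @ L.T == a, for a real SPD matrix."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(a[i][i] - s)
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return L

def correlated_row(n, r, rng=random):
    """One source row: i.i.d. standard Gaussians colored by the Cholesky
    factor of toeplitz_cov(n, r), so the row has that covariance."""
    L = cholesky(toeplitz_cov(n, r))
    g = [rng.gauss(0.0, 1.0) for _ in range(n)]
    return [sum(L[i][k] * g[k] for k in range(i + 1)) for i in range(n)]
```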
while this paper has been focused on the noiseless case , one naturally wonders about the performance of the proposed method in the practical noisy case . we provide a simple simulation to illustrate this . we set , with randomly generated , sources with frequencies of , and and powers of 2 , 3 and 1 respectively , and . the signals of each source are generated with constant amplitude and random phases . complex white gaussian noise is added to the observed samples with noise variance . we attempt to denoise the observed noisy signal and recover the frequency components by solving the following optimization problem : where , set to ( mean + twice standard deviation ) , upper bounds the frobenius norm of the noise with large probability . our simulation results of one monte carlo run are presented in fig. [ fig : noisycase ] . the smv case is studied in fig. [ fig : noisycase1 ] , where only the first measurement vector is used for frequency recovery . it is shown that the three frequency components are correctly identified by the atomic norm minimization method while music fails . the mmv case is studied in fig. [ fig : noisycase2 ] with uncorrelated sources , where both atomic norm minimization and music succeed in identifying the three frequency components . the coherent source case is presented in fig. [ fig : noisycase3 ] , where we modify source 3 in fig. [ fig : noisycase2 ] such that it is coherent with source 1 . music fails to detect the coherent sources as expected , while the proposed method still performs well . all three subfigures show that spurious frequency components can be present using the atomic norm minimization method ; however , their powers are insignificant . to be specific , the spurious components have about of the total powers in fig. [ fig : noisycase1 ] , and this number is on the order of in the latter two subfigures .
since the numerical results imply that the proposed method is robust to noise , a theoretical analysis will be an interesting topic of future studies . the proposed method takes about in each scenario . finally , it is worth noting that the proposed method requires knowledge of the noise level , while music needs the source number . we studied in this paper the joint sparse frequency recovery problem , which arises in practical array processing applications . we presented nonconvex and convex optimization methods for its solution and analyzed their theoretical guarantees under no or very weak assumptions on the source signals . our results extend the smv atomic norm methods and their theoretical guarantees in to the mmv case , extend the existing discrete joint sparse recovery framework to the continuous dictionary setting , and provide theoretical guidance for array processing applications . while this paper is focused on the worst case analysis , it will be interesting to investigate in the future the average case under stronger assumptions on the source signals , in which the numerical simulations of this paper suggest that a weaker frequency separation condition is sufficient for exact recovery as the number of measurement vectors increases . the vector - form hoeffding 's inequality in lemma [ lem : vectorheoffding ] is a corollary of the following matrix - form hoeffding 's inequality in . consider a finite sequence of independent , random , self - adjoint matrices with dimension , and let be a sequence of fixed self - adjoint matrices . assume that each random matrix satisfies then , for all , where and denotes the largest eigenvalue . we now prove lemma [ lem : vectorheoffding ] based on the _ dilation _ technique . in particular , the dilation of a vector is a self - adjoint matrix . it is not difficult to show that and . we let and , where is the row of with . it follows that are independent matrices of dimension with according to the assumptions of .
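for reference , the dilation construction alluded to above can be written out explicitly as follows . this is the standard form used alongside the matrix hoeffding inequality ; the exact normalization in the paper may differ .

```latex
\mathcal{D}(x) = \begin{bmatrix} 0 & x^{H} \\ x & 0 \end{bmatrix}, \qquad
\mathcal{D}(x)^{2} = \begin{bmatrix} x^{H}x & 0 \\ 0 & x\,x^{H} \end{bmatrix}, \qquad
\|\mathcal{D}(x)\| = \lambda_{\max}\!\big(\mathcal{D}(x)\big) = \|x\|_{2},
```

the last identity holding because the nonzero eigenvalues of the dilation are exactly $\pm\|x\|_2$ .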
by the aforementioned properties of the dilation , we have that and . so , we can choose and it follows that . we complete the proof by applying the matrix - form hoeffding 's inequality to : the proof is based on the following bernstein 's polynomial inequality . let be any polynomial of degree on complex numbers with derivative . then , [ lem : bernstein ] on the event , we make the following decomposition for some and : the second term has been bounded in proposition [ prop : boundgrid ] . we next provide an upper bound for , while the same bound is applicable to under similar arguments . since is vector - valued and for any , we attempt to bound elementwise . denote the entry of . then we have for some constant following from by noticing that , where denotes the column of . viewing as a polynomial of of degree , we get by applying bernstein 's polynomial inequality that it follows that then we can select satisfying such that for any there exists a point satisfying that . consequently , we have , which together with ( [ formu : decomp ] ) gives that on finally , ( [ formu : mboundcontinu ] ) is a direct consequence of ( [ formu : mboundgrid ] ) by inserting that . when ( [ formu : mboundcontinu ] ) is satisfied , we have by proposition [ prop : boundgrid ] and lemma [ lem : invertibility ] . z. yang would like to thank gongguo tang for fruitful discussions . z. yang and l. xie , `` continuous compressed sensing with a single or multiple measurement vectors , '' in _ 2014 ieee workshop on statistical signal processing ( ssp ) , gold coast , australia , available online at https://dl.dropboxusercontent.com/u/34897711/ssp14.pdf _ , june 2014 . e. candès , j. romberg , and t. tao , `` robust uncertainty principles : exact signal reconstruction from highly incomplete frequency information , '' _ ieee transactions on information theory _ , vol . 52 , no . 2 , pp . 489-509 , 2006 . d. l. donoho and m.
elad , `` optimally sparse representation in general ( nonorthogonal ) dictionaries via minimization , '' _ proceedings of the national academy of sciences _ , vol . 100 , no . 5 , pp . 2197-2202 , 2003 . l. hu , z. shi , j. zhou , and q. fu , `` compressed sensing of complex sinusoids : an approach based on dictionary refinement , '' _ ieee transactions on signal processing _ , vol . 60 , no . 7 , pp . 3809-3822 , 2012 . z. yang , c. zhang , and l. xie , `` robustly stable signal recovery in compressed sensing with structured matrix perturbation , '' _ ieee transactions on signal processing _ , vol . 60 , no . 9 , pp . 4658-4671 , 2012 . d. malioutov , m. cetin , and a. willsky , `` a sparse signal reconstruction perspective for source localization with sensor arrays , '' _ ieee transactions on signal processing _ , vol . 53 , no . 8 , pp . 3010-3022 , 2005 . s. f. cotter , b. d. rao , k. engan , and k. kreutz - delgado , `` sparse solutions to linear inverse problems with multiple measurement vectors , '' _ ieee transactions on signal processing _ , vol . 53 , no . 7 , pp . 2477-2488 , 2005 . r. gribonval , h. rauhut , k. schnass , and p. vandergheynst , `` atoms of all channels , unite ! average case analysis of multi - channel sparse recovery using greedy algorithms , '' _ journal of fourier analysis and applications _ , vol . 14 , no . 5 - 6 , pp . 655-687 , 2008 . j. m. kim , o. k. lee , and j. c. ye , `` compressive music : revisiting the link between compressive sensing and array signal processing , '' _ ieee transactions on information theory _ , vol . 58 , no . 1 , pp . 278-301 , 2012 . z. yang , l. xie , and c. zhang , `` a discretization - free sparse and parametric approach for linear array signal processing , '' _ ieee transactions on signal processing , accepted with mandatory minor revisions , available at http://arxiv.org/abs/1312.7695 _ , 2014 . j. b.
kruskal , `` three - way arrays : rank and uniqueness of trilinear decompositions , with application to arithmetic complexity and statistics , '' _ linear algebra and its applications _ , vol . 18 , no . 2 , pp . 95-138 , 1977 . m. malek - mohammadi , m. babaie - zadeh , a. amini , and c. jutten , `` recovery of low - rank matrices under affine constraints via a smoothed rank function , '' _ ieee transactions on signal processing _ , vol . 62 , no . 4 , pp . 981-992 , 2014 . | frequency recovery / estimation from samples of superimposed sinusoidal signals is a classical problem in statistical signal processing . its research has been recently advanced by atomic norm techniques which deal with continuous - valued frequencies and completely eliminate the basis mismatches of existing compressed sensing methods . this work investigates the frequency recovery problem in the presence of multiple measurement vectors ( mmvs ) which share the same frequency components , termed joint sparse frequency recovery , arising naturally in array processing applications . - and -norm - like formulations , referred to as the atomic norm and the atomic norm , are proposed to recover the frequencies and cast as ( nonconvex ) rank minimization and ( convex ) semidefinite programming , respectively . their guarantees for exact recovery are theoretically analyzed , which extend existing results with a single measurement vector ( smv ) to the mmv case and meanwhile generalize the existing joint sparse compressed sensing framework to the continuous dictionary setting . in particular , given a set of regularly spaced samples per measurement vector , it is shown that the frequencies can be exactly recovered via solving a convex optimization problem once they are separated by at least ( approximately ) .
under the same frequency separation condition , a random subset of regularly spaced samples of size per measurement vector is sufficient to guarantee exact recovery of the frequencies and missing samples with high probability via similar convex optimization . extensive numerical simulations are provided to validate our analysis and demonstrate the effectiveness of the proposed method . * keywords : * array processing , atomic norm , compressed sensing , direction of arrival ( doa ) estimation , frequency recovery / estimation , joint sparsity , low rank matrix completion , multiple measurement vector ( mmv ) . |
a _ robot swarm _ is a system of multiple autonomous mobile robots engaged in some collective task . in hostile environments , it may be desirable to employ large groups of low cost robots to perform various tasks cooperatively .this approach is more resilient to malfunction and more configurable than a single high cost robot . _swarm robotics _ , pioneered by c. w. reynolds , is a novel approach to coordinate a large number of robots .the idea is inspired by the observation of social insects .they are known to coordinate their actions to execute a task that is beyond the capability of a unit .the field of swarm robotics has been enriched by many researchers adopting different approaches for swarm aggregation , navigation , coordination and control .mobile robots can move in the physical world and interact with each other .geometric problems are inherent to such multiple cooperative mobile robot systems and have been well studied .multiple robot path planning , moving to ( and maintaining ) formation and pattern generation are some important geometric problems in swarm robotics. multiple robot path planning may deal with problems like finding non - intersecting paths for mobile robots .the formation and marching problems require multiple robots to form up and move in a specified pattern .these problems are also interesting in terms of distributed algorithms .formation and marching problems may act as useful primitives for larger tasks , like , moving a large object by a group of robots or distributed sensing .pattern generation in cellular robotic systems ( crs ) is related to pattern formation problem by mobile robots .this paper addresses a very well known and challenging problem involving robot swarms , namely _ gathering_. 
the objective is to collect multiple autonomous mobile robots into a point or a small region . the choice of the point is not fixed in advance . initially the robots are stationary and in arbitrary positions . the gathering problem is also referred to as _ point formation _ , _ convergence _ , _ homing _ or _ rendezvous _ . a pragmatic view of swarm robots asks for a distributed environment . several interesting works have been carried out by researchers on distributed algorithms for mobile robots . a simple basic model called the _ weak model _ is popular in the literature . the world of the robots consists of the infinite plane and multiple robots living on it . the robots are considered dimensionless , i.e. , points . all robots are autonomous , homogeneous and perform the same algorithm ( a _ look - compute - move _ cycle ) . in the _ look _ state , a robot takes a snapshot of its surroundings , within its range of vision . it then executes an algorithm for computing the destination in the _ compute _ state . the algorithm is the same for all robots . in the _ move _ state , the robot moves to the computed destination . the robots are oblivious ( memoryless ) . they do not preserve any data computed in the previous cycles . there is no explicit communication between the robots . the robots coordinate by means of observing the positions of the other robots on the plane . a robot is always able to see another robot within its visibility radius or range ( which may be infinite ) . two different models have been used for robot movement . under the sym model , the movement of a robot is considered to be instantaneous , i.e. , when a robot is moving , other robots can not see it . later , that model was modified to the corda model , where the movement of a robot is not instantaneous . a robot in motion is visible . the corda model is a better representation of the real world . the robots may or may not be synchronized . synchronous robots execute their cycles together . in such a system , all robots get the same view .
as a result , they compute on the same data . in the more practical asynchronous model , there is no such guarantee . by the time a robot completes its computation , several of the robots may have moved from the positions based on which the computation was made . the robots may have a common coordinate system or individual coordinate systems having no common orientation or scale . the problem of gathering multiple robots has been studied on the basic model of robotic systems with different additional assumptions . prencipe observed that gathering is not always possible for asynchronous robots . however , instead of meeting at a single point , if the robots are asked to move very close to each other , then the problem can be solved by computing the center of gravity of all robots and moving the robots towards it . under the weak model , robots do not have any memory . as they work asynchronously and act independently , once a robot moves towards the center of gravity , the center of gravity also changes . moving towards the center of gravity does not gather the robots at a single point . however , this method makes sure that the robots can be brought as close as one may wish . a possible solution to this problem is to choose a point which , unlike the center of gravity , is invariant with respect to the robots ' movement . one such point in the plane is the point which minimizes the sum of distances between itself and all the robots . this point is called the _ weber or fermat or torricelli _ point . it does not change during the robots ' movement , if the robots move only towards this point . however , the _ weber _ point is not computable for a higher number of robots ( greater than 4 ) . thus , this approach also can not be used to solve the gathering problem . if a common coordinate system is assumed , instead of individual coordinate systems , then the gathering problem is solvable even with limited visibility .
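although the weber point admits no closed form for five or more points , it can be approximated numerically , e.g. by weiszfeld 's algorithm sketched below ( our illustration ; note that a numerical approximation does not rescue the gathering argument above , which needs the exact invariant point ) :

```python
import math

def weiszfeld(points, iters=200, eps=1e-12):
    """Approximate the Weber point (geometric median) of a list of 2D points
    by iteratively re-weighting the centroid with inverse distances."""
    x = sum(p[0] for p in points) / len(points)
    y = sum(p[1] for p in points) / len(points)
    for _ in range(iters):
        num_x = num_y = den = 0.0
        for (px, py) in points:
            d = math.hypot(x - px, y - py)
            if d < eps:            # current estimate sits on a data point
                return (px, py)
            w = 1.0 / d
            num_x += w * px
            num_y += w * py
            den += w
        x, y = num_x / den, num_y / den
    return (x, y)
```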
if the robots are synchronous and their movements are instantaneous , then the gathering problem is solvable even with limited visibility . cieliebak et al. show that there are solutions for gathering without synchronicity if the visibility is unlimited . one approach is by _ multiplicity _ detection . if multiple robots are at the same point then the point is said to have _ strict multiplicity _ . however , the problem is not solvable for _ two _ robots even with _ multiplicity _ detection . there are some solutions for _ three _ and _ four _ robots . for more than _ four _ robots , there are two algorithms with restricted sets of initial configurations . in the first algorithm , the robots are initially in a _ bi - angular configuration _ . in such a configuration , there exists a point and two angles and such that the angle between any two adjacent robots is either or and the angles alternate . the second algorithm works if the initial configuration of the robots does not form any regular -gon . prencipe reported that there exists no deterministic oblivious algorithm that solves the gathering problem in a finite number of cycles for a set of robots . _ convergence _ of multiple robots was studied by peleg and cohen , who proposed a gravitational algorithm in the fully asynchronous model for convergence of any number of robots . a dimensionless robot is unrealistic . czyzowicz et al. extend the weak model by replacing the point robots by unit disc robots . they called these robots _ fat robots _ .
methods for gathering three and four fat robots have been described by them . the paper considers partial visibility and presents several procedures to avoid different situations which act as obstacles to gathering . we propose a deterministic gathering algorithm for fat robots . the robots are assumed to be transparent in order to achieve full visibility . having _ fat robots _ , we first define the _ gathering pattern _ to be formed by the robots . therefore , _ gathering pattern _ formation becomes a special case of _ pattern formation _ by mobile robots , which is also a challenging problem for mobile robots . recent work shows that pattern formation and leader election are equivalent for robots . however , this work considered the robots to have common handedness or chirality . we propose a gathering algorithm which assumes no chirality and uses the leader election technique in order to form the gathering pattern . we also show that if leader election is possible then formation of the _ gathering pattern _ is possible even with no chirality . section [ model ] describes the model used in this paper and presents an overview of the problem . then we move to the solution approach . section [ char ] characterizes the geometric configurations of the robots for gathering . section [ algo ] presents the _ leader election _ and _ gathering _ algorithms . finally , section [ con ] summarizes the contributions of the paper and concludes . we use the basic structure of the _ weak model _ and add some extra features which extend the model . let be a set of fat robots . a robot is represented by its center , i.e.
, by we mean a robot whose center is . the following assumptions describe the system of robots deployed on the 2d plane :

* robots are autonomous .
* robots execute the cycle ( _ look - compute - wait - move _ ) asynchronously .
* robots are anonymous and homogeneous in the sense that they are not uniquely identifiable , neither with a unique identification number nor with some external distinctive mark ( e.g. , color , flag , etc . ) .
* a robot can sense all other robots irrespective of their configuration .
* each robot is represented as a unit disc ( _ fat robots _ ) .
* the corda model is assumed for robot movement . under this model the movement of the robots is not instantaneous . while in motion , a robot may be observed by other robots .
* robots have no common orientation or scale . each robot uses its own local coordinate system ( origin , orientation and distance ) . a robot has no particular knowledge about the coordinate system of any other robot , nor of a global coordinate system .
* robots can not communicate explicitly . each robot has a camera or sensor which can take pictures or sense over 360 degrees . the robots communicate only by means of observing other robots using the camera or sensor . a robot can compute the coordinates ( w.r.t . its own coordinate system ) of other robots by observing through the camera or sensor .
* robots have infinite visibility range .
* robots are assumed to be transparent to achieve full visibility .
* robots are oblivious . they do not remember the data from the previous cycles .
* initially all robots are stationary .

let us define the _ gathering pattern _ which is to be formed by . the desired gathering pattern for transparent fat robots is shown in fig . one robot with center is at the center of the structure . we call it layer . robots at layer are around layer touching ( is a circle with center at and radius ) . robots at layer are around layer touching and so on .
fig . 1(a ) shows a gathering pattern with layers . [ gathering_pattern ] a _ gathering pattern _ is a circular layered structure of robots with a unique robot centered at . a robot at layer touches and at least and at most robots at layer . the inner layers are full . the outermost layer may not be full , but the vacant places in the outermost layer must be contiguous . according to the definition , a unique center is required to form a gathering pattern and the layers are created around the center . finding a unique robot and marking it as the center is not possible for , and robots . this is due to the symmetric structure ( fig . 2 ) . for 2 or 3 robots all robots are potential candidates to be the center . for 4 robots there are two such robots . to identify a unique robot for the center of the gathering pattern , at least robots are required . for robots , a robot which touches the rest of the robots is treated as the central robot or the robot at layer . fig . 1(b ) shows the desired gathering pattern with the minimum number of robots . we are given a set , , of separated , stationary transparent fat robots . the objective is to move the robots so as to build the gathering pattern with the robots in . for gathering multiple robots , our approach is to select one robot and assign it a destination . all other robots remain stationary till the designated robot reaches its destination . at each turn , the robot selected for motion is chosen in such a way that , if any other robot takes a snapshot and computes during the motion , this particular robot would be the only one eligible for movement . to implement this , we choose the destination carefully . the robot nearest to the destination is allowed to move first . however , a problem will occur when multiple robots are at the same distance from the destination . a leader election mechanism is required to select a robot among such multiple eligible robots .
to overcome this problem , our approach is to order the set of robots on the 2d plane with respect to the destination . the robots move one by one , according to this order , towards the destination . the mutual ordering of the robots in the set is invariant , though the size of the set changes . a set of robots on the 2d plane is given . our objective is to move them , one by one , to form the desired _ gathering pattern _ . in order to do so , we create an ordering among the robots and move them one by one following the ordering . this ordering will be computed at different robot sites , which have different coordinate systems , orientations and origins . thus , we need the ordering algorithm to yield the same result even if the point set is given a transformation with respect to origin , axes and scale . for this to succeed , no two robots should have the same view . keeping this in mind , we proceed to characterize the geometric configurations that are required for a set of robots to be orderable ( a formal definition is presented shortly ) . in this section we represent the robots by points . let be a non - empty set of points on the 2d plane . is a line on the plane . let be the points of ( not all ) which lie on . partitions into two subsets and , where or is non - empty . let be the straight line that intersects at point such that and is the middle point of the span of the points on . a point is said to be a straight mirror image of across , if is a mirror image of across ( figure [ mirrorimage](a ) ) . [ stmirrorimage ] let be the straight mirror image of across . is the straight mirror image of across .
is said to be the skewed mirror image of across ( figure [ mirrorimage](b ) ) . [ skmirrorimage ] a set of points on the 2d plane is said to be in straight - symmetric configuration if there exists a straight line ( on that plane ) not containing all the points of , such that each point in has a straight mirror image in ( figure [ symconfig](a ) ) . the line is called a line of straight symmetry . [ def - straight - symmetric - conf ] note that each point in is the mirror image of itself . a set of points on the 2d plane is said to be in skew - symmetric configuration if there exists a straight line ( on that plane ) not containing all the points of , such that each point in has a skewed mirror image in ( figure [ symconfig](b ) ) . the line is called a line of skew symmetry . [ def - skewed - symmetric - conf ] [ def - symmetric - conf ] a set of points on the 2d plane is said to be in symmetric configuration ( denoted by ) if it is either a singleton set or in straight symmetric or skew symmetric configuration .
for a singleton set , any line passing through the point is a line of symmetry .[ def - asymmetric - conf ] a set of points , which is not in symmetric configuration is in asymmetric configuration ( denoted by ) .our requirement does not stop at requiring the algorithm to be robust to changes in the coordinate system .the positions of the robots also change as the algorithm progresses .our objective is to order the points ( robots ) in such that when the robots move one by one , according to this order , towards the destination , the mutual ordering of the robots in the set is invariant .we define an orderable set as follows .[ def - orderable - set ] a set of points , on the plane , is called an orderable set , if there exists a deterministic algorithm , which produces a unique ordering of the points of , such that the ordering is same irrespective of the choice of origin and coordinate system .[ lemma - sym - ord ] let be a non - empty , non - singleton set of points .if is in , then is not orderable ._ proof : _ let be a line of symmetry ( straight or skewed ) for .let be the set of points from , lying on . divides into two halves and . and are mirror images ( straight or skewed ) of each other .let be a point in and in , the mirror image of .consider an arbitrary ordering algorithm .if we run on with as the origin , it produces an ordering of .let be the first point from that ordering such that is not on .on the other hand , if we run on , with as the origin , the symmetry tells us that the ordering obtained will have ( the mirror image of ) as the corresponding first point in the order . since , is not in , .since the choice of was arbitrary , no algorithm will produce the same order irrespective of the choice of origin .hence , is not orderable .[ lemma - los - passes - centersecp ] any line of symmetry of passes through the center of the smallest enclosing circle ( sec ) of and divides the points on sec into two equal mirror images ( straight or skewed ) . 
in order to check whether a set of points is in or not , we need to find out if a line of symmetry exists for this set . for this search to be feasible , we need to reduce the potential set of candidate lines for a line of symmetry . in order to do so , first the sec of is computed . the points on the sec are taken to form a convex polygon , say . a line of symmetry ( straight or skewed ) of cuts at two points . thus the line of symmetry contains at most two points from . [ hull - vertex - symmetry ] let be a set of points in . the mirror image ( straight or skewed ) of a vertex in is also in . _ proof : _ consider a vertex of . let be the mirror image of across the line of symmetry . suppose is not in . suppose the line intersects at and intersects at . ( if and are two points on the 2d plane , the distance between and is denoted by . ) should also have a mirror image across , on the same side where lies . let be the mirror image of . note that also intersects at point and . if is inside the convex hull then ( figure [ hullimage](a ) ) . hence , and . this implies that is not a hull vertex . contradiction ! on the other hand , if is outside the convex hull then ( figure [ hullimage](b ) ) . hence , and . this implies that is not a hull vertex . contradiction ! therefore , if is a hull vertex , must also be a hull vertex . if is in , then is in . _ proof : _ follows from lemma [ hull - vertex - symmetry ] . [ lemmmastraightsymhull ] if is in , then for any line of straight symmetry of : 1 . if passes through a vertex of , it bisects the interior angle at ; 2 . if intersects an edge of at a point other than a vertex , it is the perpendicular bisector of . _ proof : _ follows from the proof of lemma [ hull - vertex - symmetry ] . [ losks - intersect-1-vertex - edge ] any line of skew symmetry intersects either at two vertices or at two edges .
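the membership test implicit in the definition of a straight - symmetric configuration can be carried out directly : reflect every point across a candidate line and check that the image is again in the set . the following sketch ( function names are ours ) does this with a small numerical tolerance :

```python
def reflect(q, a, b):
    """Reflect point q across the line through points a and b."""
    (qx, qy), (ax, ay), (bx, by) = q, a, b
    dx, dy = bx - ax, by - ay
    t = ((qx - ax) * dx + (qy - ay) * dy) / (dx * dx + dy * dy)
    fx, fy = ax + t * dx, ay + t * dy      # foot of the perpendicular
    return (2 * fx - qx, 2 * fy - qy)

def is_line_of_straight_symmetry(points, a, b, tol=1e-9):
    """True iff every point's straight mirror image across line (a, b)
    is again a point of the set."""
    def in_set(p):
        return any(abs(p[0] - x) <= tol and abs(p[1] - y) <= tol
                   for (x, y) in points)
    return all(in_set(reflect(p, a, b)) for p in points)
```

per the lemmas above , only lines through the center of the sec ( through a hull vertex or perpendicular to a hull edge ) need to be tried as candidates .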
[ losks - intersect-2-vertex - edge ] a skew symmetric polygon has an even number of vertices and edges . [ lem - rectangle ] a pair of edges in a polygon inscribed in a circle are parallel and equal if and only if they are opposite sides of a unique rectangle inscribed in that circle . _ proof : _ * if : * trivial . * only if : * suppose and are two edges of a polygon such that and ( fig . [ skew1 ] ) . is the intersection point of the lines and . it is easy to see that . so , and . this means that the chords and bisect each other . hence , and are both diameters of the circle . therefore , . this implies that is a rectangle . [ obs - parallel - edge - equal ] a polygon inscribed in a circle is skew symmetric if and only if each edge of the polygon has a parallel edge of equal length . _ proof : _ * if : * suppose and are parallel and equal edges of ( fig . [ skew ] ) . let us join and by a straight line . we shall show that is a line of skew symmetry for . is the skewed mirror image of across . let be the adjacent edge of . we join . since is a diameter of the circumcircle of ( lemma [ lem - rectangle ] ) , degrees . we draw the rectangle . by lemma [ lem - rectangle ] , is the edge of the polygon which is parallel to and equal in length with . by repeating this argument it can be shown that the polygonal chains on both sides of are skew symmetric . * only if : * let be a skew symmetric polygon . is a line of skew symmetry for . partitions into two halves , namely and . first consider the case when passes through two vertices of , namely and ( fig . [ skewsymhfig ] ) . suppose is the edge incident at in and is the edge incident at in . since is a line of skew symmetry , is the skewed mirror image of . therefore , is denoted by and . let be the edge adjacent to in and the edge adjacent to in . similarly , is the mirror image of . therefore , and . in this manner , we can find a parallel and equal edge for every edge .
Now consider the case when intersects two edges of , namely and , at points and respectively (Fig. [skewsymhfig]). If we consider a modified polygon with additional vertices at and , the result follows from the previous case.

Suppose is a skew symmetric polygon. Let be a line intersecting two vertices of , namely and . Let be the interior angle of at vertex and the interior angle at vertex . divides into and , and into and (Fig. [skewsymhfig]). Suppose is an edge incident at in and is an edge incident at in ; is the edge adjacent to in and is the edge adjacent to in .

[losks-vertex] For a skew symmetric polygon , is a line of skew symmetry if and only if or .

_Proof:_ *If:* If , then . From Lemma [obs-parallel-edge-equal] there exists an edge such that and . As is convex, and implies that and are the same edge. Hence . Using an argument similar to that in the proof of Lemma [lem-rectangle], it can be shown that the polygonal chains on both sides of are skew symmetric. Similarly, if , the polygonal chains on both sides of are skew symmetric. Hence is a line of skew symmetry.

*Only if:* Follows from the definition of a skew symmetric polygon.

Let be a line intersecting two edges of , namely and , at points and respectively (Fig. [skewsymhfig]).

[losks-edge] For a skew symmetric polygon , is a line of skew symmetry if and only if , and .

_Proof:_ *If:* , and imply that is the skewed mirror image of and is the skewed mirror image of across . As , using an argument similar to that in the proof of Lemma [lem-rectangle], it can be shown that the polygonal chains on both sides of are skew symmetric. Hence is a line of skew symmetry.

*Only if:* Follows from the definition of a skew symmetric polygon.

In order to check whether a set of points is in or not, we first compute the SEC of . The convex polygon , as described earlier, is also computed.
For each vertex and each edge of , we look for a line of symmetry (straight or skewed) passing through that vertex or edge.

Since we want the ordering to be the same for any choice of origin and axes, we can only use information that is invariant under these transformations. Examples of such properties are distances and angles between the points. The distances may be affected by the choice of unit distance, but even then their ratios remain the same. One possible solution is to select the robot closest to the destination as the leader, i.e., the candidate to move. This also satisfies our extra requirement: the robot remains the point closest to the destination, and hence the leader, as it moves.

Robots equidistant from the destination lie on the circumference of a circle with the destination as its center; they also form a convex polygon. Let be a set of robots forming such a convex polygon inscribed in a circle. For the rest of this paper, by _convex polygon_ we mean such a polygon inscribed in a circle. The center of the circle is called the center of the polygon.

[def-straight-symmetric-poly] A convex polygon is _straight symmetric_ if the set of vertices of is in straight symmetric configuration.

[def-skew-symmetric-poly] A convex polygon is _skew symmetric_ if the set of vertices of is in skew symmetric configuration.

[def-symmetric-poly] A convex polygon is _symmetric_ if it is either straight symmetric or skew symmetric.

[def-asymmetric-poly] A polygon which is not symmetric is _asymmetric_.
Note: a single point on a circle is a special case; it is symmetric. Though any line passing through it is a line of symmetry, we shall only call the line passing through the center of the circle (and the point itself) the line of symmetry.

[obs-asym-poly] The set of vertices of an asymmetric polygon is in .

[lem-skew-los] Any straight line passing through the center of a skew symmetric polygon is a line of skew symmetry for that polygon.

_Proof:_ Let be a skew symmetric polygon. Let and be two parallel edges of equal length (Fig. [skew2]), and let and be the lines passing through – and – respectively. From Lemma [lem-rectangle] it follows that and pass through the center of and are lines of skew symmetry of . It now suffices to prove that any line passing through the center and lying between and is a line of skew symmetry for . Without loss of generality, rotate by some angle around , keeping it between and , and let be the new position of the line. Following Lemma [losks-edge] and using an argument similar to that in the proof of Lemma [lem-rectangle], it can be shown that the polygonal chains on both sides of are skew symmetric. Hence is a line of skew symmetry.

For a convex polygon , a line of symmetry (straight or skewed) intersects the polygon at two points. The two points can be two vertices, they may lie on two edges, or one may be a vertex while the other lies on an edge (Fig. [symmetric]). Let intersect at and . Suppose has vertices. The vertices are labeled starting from the vertex next (clockwise) to up to the vertex preceding , as . If is a vertex, then ; if lies on an edge, . Similarly, the vertices are labeled starting from the vertex next (clockwise) to up to the vertex preceding , as . If is a vertex, then ; if lies on an edge, . The following notations are used in the rest of this paper.

* .
* .
* .
* .
 iff for ( ).

[lemma-straight-sym] Let be a straight symmetric polygon. A straight line is a line of straight symmetry for if and only if or .

_Proof:_ Follows from Definition [def-straight-symmetric-conf] and Definition [def-straight-symmetric-poly].

[lemma-skewed-sym] Let be a skew symmetric polygon and a line of skew symmetry. divides into two skewed mirror image parts if and only if or .

_Proof:_ Follows from Definition [def-skewed-symmetric-conf] and Definition [def-skew-symmetric-poly].

[obs-asymmetric-polygon] A polygon is asymmetric if and only if all of the following conditions hold:

1. for ;
2. for and ;
3. for and .

Note: conditions and are equivalent, so Theorem 1 can also be written as follows.

*Theorem 1.1*: A polygon is asymmetric if and only if all of the following conditions hold:

1. for ;
2. for and ;
3. for and .

[lem-asym-ord] An asymmetric polygon is orderable.

_Proof:_ Let be an asymmetric polygon. For each vertex ( ) we compute the tuple , and take the lexicographic minimum of the tuples as the ordering of . Since is asymmetric, for any and , for any and such that (Theorem [obs-asymmetric-polygon]). Hence the ordering is the same irrespective of the choice of origin or coordinate axes. Thus the set of vertices of an asymmetric polygon is orderable (Definition [def-orderable-set]).

A set of points equidistant from the destination forms a convex polygon whose vertices lie on the circumference of a circle. There can be multiple such sets, i.e., the robots within a set are equidistant from the destination but robots in different sets are not. In such a case we get multiple concentric circles, each of them enclosing a convex polygon.
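The ordering used in the proof of Lemma [lem-asym-ord] can be sketched as follows. This is our own illustrative rendering, under the assumptions that the vertices are given in cyclic order and the common center is known; both traversal directions are tried, since the robots need not share a notion of clockwise.

```python
import math

def leader_index(poly, center):
    """Index of the vertex starting the lexicographically smallest
    tuple of angular gaps.  For an asymmetric polygon this minimum is
    unique, so every robot elects the same leader regardless of its
    own coordinate frame (exact float comparison is a simplification)."""
    cx, cy = center
    ang = [math.atan2(y - cy, x - cx) for x, y in poly]
    n = len(poly)
    gaps = [(ang[(i + 1) % n] - ang[i]) % (2 * math.pi) for i in range(n)]
    best = None
    for i in range(n):
        fwd = tuple(gaps[(i + k) % n] for k in range(n))      # one direction
        rev = tuple(gaps[(i - 1 - k) % n] for k in range(n))  # the other
        for cand in (fwd, rev):
            if best is None or cand < best[0]:
                best = (cand, i)
    return best[1]
```

Because the tuple is built only from angular distances about the center, it is invariant under any choice of origin, axes, and unit of length, which is exactly the invariance the lemma requires.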
Let = be the convex polygons whose union is the whole set of points. The vertices of each polygon in lie on the circumference of a circle, and these circles are concentric. The center of the circles is known; generally, it is the center of the SEC formed by the points in , and is also considered the center of every polygon. The elements of are sorted according to their distances from the center. Any polygon can be extracted by selecting the points equidistant from the center. The polygon at level ( ) is the closest and the polygon at level ( ) is the farthest.

[def-sym-pair] A pair of polygons and (denoted by ) in is called a _symmetric pair_ if and have a common line of symmetry. A pair is _asymmetric_ if it is not symmetric.

[obs-pair-asym-ord] If either polygon of a pair is asymmetric, then the pair is asymmetric.

[lem-sym-notord] A symmetric pair is not orderable.

_Proof:_ Suppose is a symmetric pair. Then and have a common line of symmetry (straight or skewed), which divides each of and into two equal halves (straight or skewed mirror images). Thus the union of the vertices of and is divided into two equal halves by , i.e., the union of the vertices of and is in . Hence is not orderable (Lemma [lemma-sym-ord]).

Suppose the vertices of in the pair are projected radially onto the circumference of the enclosing circle of ( ). Construct a polygon with the full set of vertices on the circle.

[obs-merg-asym-pair] If is an asymmetric pair, then the polygon is asymmetric.

[lem-asym-pair-ord] An asymmetric pair is orderable.

_Proof:_ Let be an asymmetric pair. Then is an asymmetric polygon (Observation [obs-merg-asym-pair]). Therefore is orderable (Lemma [lem-asym-ord]), and hence is orderable.

[def-sym-calg] is called symmetric (or is in ) if all polygons in have a common line of symmetry.

[lem-all-pair-asym-ord] If there exists an asymmetric pair in , then is orderable.
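The decomposition into concentric polygons described above is straightforward to compute once the center is known. A minimal sketch (our own naming; the rounding tolerance used to merge floating-point distances is an assumption):

```python
import math

def concentric_polygons(points, center, ndigits=6):
    """Partition points into concentric rings by (rounded) distance
    from the common center; each ring, sorted by angle about the
    center, is one convex polygon inscribed in a circle.  Rings are
    returned nearest-first."""
    cx, cy = center
    rings = {}
    for x, y in points:
        d = round(math.hypot(x - cx, y - cy), ndigits)
        rings.setdefault(d, []).append((x, y))
    return [sorted(rings[d],
                   key=lambda p: math.atan2(p[1] - cy, p[0] - cx))
            for d in sorted(rings)]
```

For example, four points at distance 1 and two points at distance 2 from the center produce two levels, the inner quadrilateral first.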
_Proof:_ Let in be the asymmetric pair such that is lexicographically minimum among all asymmetric pairs in . If has exactly one pair, i.e., = and is asymmetric, then is orderable (Lemma [lem-asym-pair-ord]). Consider the case when has more than one pair. Since is asymmetric, it is orderable (Lemma [lem-asym-pair-ord]). Let and be the orderings of and respectively. Let ( ) be any polygon in . Since is asymmetric, is asymmetric (Observation [obs-merg-asym-pair]). Hence is asymmetric and is orderable (Lemma [lem-asym-pair-ord]). Let be an ordering of . Varying from to ( ), we get the orderings of the polygons in terms of respectively. Hence we get an ordering of .

[lem-one-pair-asym-ord] If contains at least one asymmetric polygon, then is orderable.

_Proof:_ Let be asymmetric. For all , is asymmetric (Observation [obs-pair-asym-ord]). Hence, following Lemma [lem-all-pair-asym-ord], is orderable.

Let , , and in be pairwise symmetric, and let , , be the lines of (same) symmetry of , and respectively. Our aim is to show that , and have a common line of symmetry. To show this, we first characterize the polygons on the basis of the symmetry they have. The following lemma states an interesting property of convex regular polygons.

[lemma-sym-regular] If a polygon is convex and has more than one line of symmetry such that each line of symmetry passes through at least one vertex of , then is regular.

[Figure: and passing through the vertices and respectively.]

_Proof:_ Let be a line of symmetry for passing through the vertex of . Let the vertices starting from the vertex next to in the clockwise direction be . The angular distances between the vertices of , starting from in the clockwise direction, are denoted by (Fig. [mod]). Since is a line of symmetry for , we have .
Suppose there is another line of symmetry passing through the vertex , . This implies . We can represent the series by the following equations: . Let us consider any angle for . Combining Equations [2nd] and [3rd] we get . Substituting in Equation we get . Therefore for . This implies that is regular.

Let us define different types of polygons on the basis of symmetry as follows.

* A *Type 0* polygon is a regular convex symmetric polygon.
* A *Type 1* polygon (Fig. [type3]) is a convex, symmetric, non-regular polygon with an even number of vertices that has exactly two lines of straight symmetry and , such that
  ** ( ) does not pass through a vertex of the polygon;
  ** .
  It has no other line of straight symmetry but admits lines of skew symmetry.
* A *Type 2* polygon (Fig. [type3]) is a convex, symmetric, non-regular polygon with an even number of vertices that has exactly one line of straight symmetry, passing through either two vertices or two edges.
* A *Type 3* polygon (Fig. [type3]) is a convex, symmetric, non-regular polygon with an odd number of vertices that has exactly one line of straight symmetry, passing through a vertex and an edge.
* A *Type 4* polygon (Fig. [type3]) is a convex, symmetric, non-regular polygon with an odd (not prime) number of vertices such that the number of lines of symmetry is more than one but less than the number of vertices.
  Note: the number of lines of symmetry for such a polygon is odd.

[obs-exh-poly] The above characterization of straight symmetric polygons is exhaustive.

[obs-sym-type-0] If is a Type 0 polygon with an even number of vertices, then any straight line passing through the center of is a line of symmetry for .

[obs-sym-type-01] If is a Type 0 polygon with an odd number of vertices, then any line of symmetry must pass through exactly one vertex of .

[lem-sym-type-01] Any straight line passing through the center of a Type 1 polygon is a line of symmetry (skewed or straight) for that polygon.

_Proof:_ Let be a Type 1 polygon. has exactly two lines of straight symmetry, say and , both passing through the center of . It suffices to prove that any line between and is a line of skew symmetry for . Without loss of generality, rotate by some angle around , keeping it between and , and let be the new position of the line. either intersects two edges of or passes through two vertices. If passes through two edges of , then using Lemma [losks-edge] and an argument similar to that in the proof of Lemma [lem-rectangle], it can be shown that the polygonal chains on both sides of are skew symmetric. If passes through two vertices of , the same follows using Lemma [losks-vertex]. Hence is a line of skew symmetry.

[obs-gcd] Let and be two polygons with and vertices respectively, where and are odd.
If and have one common line of symmetry, then and have many common lines of symmetry.

Let and ( ) be two polygons in with and vertices respectively such that

* and are odd;
* does not divide or does not divide ;
* .

has at least one common line of symmetry. We construct and replace both and by in . Following Observation [obs-gcd], has many common lines of symmetry.

[lem-los-gij] A line of symmetry for is a common line of symmetry for .

_Proof:_ The number of lines of symmetry of is , where and are the numbers of vertices of and respectively. The angle between two adjacent lines of symmetry of is degrees; the angle between two adjacent lines of symmetry of is degrees, which divides degrees. This implies that the lines of symmetry of are also lines of symmetry for . By a similar argument, the lines of symmetry of are also lines of symmetry for . Hence the result follows.

[theorem-los-g1n] A line of symmetry for is a common line of symmetry for .

_Proof:_ The statement is true for two concentric polygons (Lemma [lem-los-gij]). Suppose the result is true for polygons, and the polygon is introduced. If has an even number of vertices, then any common line of symmetry for — which passes through the common center of the polygons — is also a line of symmetry for , and the result holds. Suppose has an odd number of vertices. Let be a line of symmetry for . From Lemma [lem-los-gij], is a common line of symmetry for and . We merge and to get . Let be a line of symmetry for . From Lemma [lem-los-gij], is a common line of symmetry for and ; again, is a common line of symmetry for and . Proceeding in this manner, it can be shown that if is a line of symmetry for , then is a common line of symmetry for .
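The counting behind Observation [obs-gcd] and Lemma [lem-los-gij] can be checked numerically for regular polygons: two regular polygons with n1 and n2 vertices that share one axis of symmetry share exactly gcd(n1, n2) axes. This is our own illustrative sketch, restricted to regular polygons, whose axes through a shared axis at angle `base` lie at `base + k*pi/n`:

```python
import math
from math import gcd

def common_axes(n1, n2, base=0.0):
    """Number of symmetry axes shared by two concentric regular
    polygons with n1 and n2 vertices, given that they share the axis
    at angle `base`.  Axis angles are compared modulo pi and rounded
    so that nearly-equal floats compare equal."""
    axes1 = {round((base + k * math.pi / n1) % math.pi, 9) for k in range(n1)}
    axes2 = {round((base + k * math.pi / n2) % math.pi, 9) for k in range(n2)}
    return len(axes1 & axes2)
```

For instance, a regular hexagon and a regular 9-gon sharing one axis share gcd(6, 9) = 3 axes, while a regular pentagon and a regular 7-gon share only the one given axis.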
[obs-type4] If and are odd, then has a common line of symmetry. Additionally, if , then is a Type 4 polygon.

We repeat this process of merging polygons until every symmetric polygon with an odd number of vertices becomes either Type 0, Type 3 or Type 4. We thus get a modified version of , and call it .

[lem-type4] Let , and be three polygons in having odd numbers of vertices , and respectively, such that no two of , and are equal and none of , and is a multiple of another. If , and are pairwise symmetric, then , and are pairwise relatively prime.

_Proof:_ Suppose , and are not pairwise relatively prime. Without loss of generality, let . Then and can be merged to form , which is a Type 4 polygon. This contradicts the construction process of .

[lem-3-pair-sym] Any three polygons , and in are pairwise symmetric if and only if , and have a common line of symmetry.

_Proof:_ *If:* Trivial.

*Only if:* Since every pair is symmetric, each of the polygons , and must itself be symmetric. Each polygon is either skew symmetric or a Type 0/1/2/3/4 polygon. Let , and be three lines of symmetry for , and respectively; , and pass through the common center of , and .

* If any of the polygons (without loss of generality, let it be ) is skew symmetric, then any line passing through is a line of symmetry for (Lemma [lem-skew-los]). As passes through , it is also a line of symmetry for . Therefore is a common line of symmetry for , and .
* If any of the polygons (without loss of generality, ) is a Type 1 polygon, then any line passing through is a line of symmetry for (Lemma [lem-sym-type-01]), so is also a line of symmetry for .
* If any of the polygons (without loss of generality, ) is a Type 2 or Type 3 polygon, then has exactly one line of symmetry. Hence , and the result follows.
* If any of the polygons (without loss of generality, ) is a Type 0 polygon with an even number of vertices, then any line passing through is a line of symmetry for (Observation [obs-sym-type-0]), so is also a line of symmetry for .
* If each of , and is a Type 0 polygon with an odd number of vertices or a Type 4 polygon, then the following subcases are possible:
  ** If any two of the polygons (without loss of generality, and ) have equal numbers of vertices, then all lines of symmetry of are also lines of symmetry for and vice versa. Hence is a line of symmetry for .
  ** Suppose the number of vertices of one polygon (say ) is a multiple of the number of vertices of another polygon (say ). Since and share a common line of symmetry, every line of symmetry for is also a line of symmetry for . So is a common line of symmetry for all three.
  ** Now consider the case when none of the above holds. Let , and be the numbers of vertices of , and respectively (Fig. [oddprime]); , and are pairwise relatively prime (Lemma [lem-type4]). passes through a vertex of and a vertex of . First we consider the case when both and lie on the same ray ( ) starting from . Suppose does not pass through any vertex of , and let be the vertex of closest to in the clockwise direction. makes an angle with at . We label the vertices of in the clockwise direction starting from and denote them by ; similarly, the vertices of starting from are denoted by , and the vertices of starting from by . Suppose passes through the vertex of and the vertex of , and passes through the vertex of and the vertex of . and make angles and respectively with ; and make angles and respectively with .
  Since , we get the following equation: . Since , . Subtracting Equation from Equation , . Since is an integer, must be an integer. and . Since , and are pairwise relatively prime, .
From Equation , . Hence is a line of symmetry for , and .

Let us now consider the case when and lie on different sides of (Fig. [oddprime1]). Equations and will now change. Since , we get the following equation: . Since , . Subtracting Equation from Equation , . By the argument stated previously, the value of is a fraction; if , then it has a fractional part, so is not a non-zero integer. Since is an integer, it must be zero. From Equation , . Hence is a common line of symmetry for , and . The result follows similarly for the other cases, such as when and lie on different sides of , or and lie on different sides of .

[theorem-sym-pair-symg] Every pair in is a symmetric pair if and only if there is a common line of symmetry for all the polygons in .

_Proof:_ *If:* Trivial.

*Only if:* We prove this by induction on the number of polygons in . If contains three polygons, the result follows from Lemma [lem-3-pair-sym]. Suppose the statement is true for polygons, and a polygon, say , is introduced. If has an even number of vertices, then any line of symmetry for is a line of symmetry for , and the result holds. Suppose has an odd number of vertices. is a symmetric pair (given). Since has a common line of symmetry (induction hypothesis), that same line is also a common line of symmetry for . Similarly, by the induction hypothesis, is also a symmetric pair. Hence, following Lemma [lem-3-pair-sym], , and have a common line of symmetry, say . Following Theorem [theorem-los-g1n], is a line of symmetry for . Thus the result follows.

[obs1-gequig] If is symmetric, then every pair in is a symmetric pair.

[gequig] is symmetric if and only if is symmetric.

_Proof:_ Follows from Theorem [theorem-los-g1n].

Let be a set of points on the 2D plane. We first identify the SEC for . With respect to the center of the SEC, we divide the points of into a set of concentric polygons.

[psym-gsym] is in iff is in .
_Proof:_ *If:* If is in , all polygons in have a common line of symmetry (Definition [def-sym-calg]). Hence the vertices in have a line of symmetry, and the set of vertices in , which is , is in .

*Only if:* Let be a line of symmetry for . also passes through the center of the SEC of (Observation [lemma-los-passes-centersecp]). Now, is the set of vertices in , and is a line of symmetry for the set of vertices in . It is easy to note that the mirror image of any vertex of is a vertex of . This implies that is a line of symmetry for every , i.e., every is symmetric across . Hence is in (Definition [def-sym-calg]).

is in iff is in .

[pasym-pord] is orderable if and only if is in .

_Proof:_ *If:* If is in , then is in (Theorem [psym-gsym]). If is in , there is no common line of symmetry for the polygons in (Definition [def-sym-calg]), so there exists an asymmetric pair in . Following Lemma [lem-all-pair-asym-ord], is orderable, and hence is orderable.

*Only if:* If is orderable, then is in (Lemma 1).

[cor-pasym-pord] is orderable if and only if is in .

Let be a set of robots. From the previous section we note that if is orderable, then leader election is possible from ; in this case the first robot in the ordering becomes the leader. In this section we present the leader election algorithm for a set of robots . can be viewed as a set of multiple concentric polygons, as described previously. The leader election algorithm elects a leader from . First we present the leader election algorithm for a single polygon; then we extend the algorithm to the set of concentric polygons.

A polygon may have more than one line of symmetry. The number of lines of symmetry of is called the degree of symmetry of . Algorithms and are used for checking symmetry in when the number of vertices of is odd and even respectively.

_Correctness of check_symmetry_odd(G):_ Follows from Theorem [obs-asymmetric-polygon].
_Correctness of check_symmetry_even(G):_ Follows from Theorem [obs-asymmetric-polygon]. Note that if has an even number of vertices, then every line of symmetry (straight or skewed) passes through two vertices and is counted twice. Therefore the algorithm returns the after dividing it by .

Next we present a leader election algorithm for the case when the convex polygon has degree of symmetry one.

_Correctness of elect_leader_sym(G):_ Follows from Theorem [obs-asymmetric-polygon].

[1sym_le] If the degree of symmetry of a convex symmetric polygon is one and the line of symmetry passes through two vertices of the polygon, then leader election is possible.

Once the leader is elected for a symmetric polygon with degree of symmetry one, the leader can be moved in such a way that the new polygon becomes asymmetric. The following algorithm does this task.

= move , distance , to its right side, on the circumference of the circle inscribing ; return

_Correctness of make_symtoasym(G):_ Whenever the leader moves from its position, becomes asymmetric. No other robot then executes the algorithm, so there is no chance that becomes symmetric again.

The following algorithm elects a leader when is asymmetric.

= lexicographic minimum of ; = ; return

_Correctness of elect_leader_asym(G):_ Follows from Lemma [lem-asym-ord].

Let be a set of concentric polygons as described previously, with the nearest to the center and the farthest from the center. Algorithm selects a leader from . The algorithm assumes that is asymmetric; therefore is an asymmetric pair (Observation [obs-pair-asym-ord]). checks whether is asymmetric. If is asymmetric, then the leader is selected from by Algorithm . If is symmetric, then computes . Since is asymmetric, is orderable, and leader election is possible from .

_Correctness of ElectLeader( ):_ Follows from Lemma [lem-one-pair-asym-ord].

First is reconstructed to form .
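The degree-of-symmetry computation underlying check_symmetry_odd and check_symmetry_even can be realized purely combinatorially on the cyclic sequence of angular gaps between consecutive vertices: each straight line of symmetry corresponds to one rotation at which the gap sequence equals its own reversal. This is our own sketch of that idea, not the paper's pseudocode:

```python
def degree_of_symmetry(gaps):
    """gaps[i] is the angular distance between consecutive vertices of
    a polygon inscribed in a circle, taken cyclically.  Each straight
    line of symmetry of the polygon corresponds to exactly one
    rotation j at which the sequence equals its reversal, so the count
    of such rotations is the degree of symmetry."""
    n = len(gaps)
    rev = gaps[::-1]
    count = 0
    for j in range(n):
        if all(abs(gaps[(j + i) % n] - rev[i]) < 1e-9 for i in range(n)):
            count += 1
    return count
```

A regular polygon's constant gap sequence matches its reversal at every rotation (degree n), a rectangle-like alternating sequence matches twice, an isosceles triangle once, and a scalene triangle not at all.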
First is preprocessed to generate . Algorithm selects a leader from , assuming that in is asymmetric. The algorithm preprocesses to make asymmetric whenever that is possible. Let in be symmetric. If any in is asymmetric, then is asymmetric (Lemma [lem-one-pair-asym-ord]). If there is no asymmetric polygon in , but there exists a such that contains only one vertex and that vertex lies on the line of symmetry of , then is moved slightly (distance ) from its current position; becomes asymmetric. is computed, and the first vertex from in the ordering of is selected as the leader from . The leader is moved to a new position (distance away from its current position) to make asymmetric.

finds an asymmetric polygon in using binary search on the set . If it cannot find any asymmetric polygon, then an asymmetric pair is searched for using Algorithm . If any in is asymmetric, then is asymmetric (Lemma [lem-all-pair-asym-ord]). In such a case, is computed by radially projecting the polygon onto . The first vertex from in the ordering of is selected as the leader from , and the leader is moved to a new position (distance away from its current position) to make asymmetric. If no such pair is found, then is symmetric and hence not orderable.

The algorithm also maintains the number of vertices of above . This ensures that the SEC will not change during the motion of the leader from . Finally, adjusts to be asymmetric if it is initially symmetric.
[obs-v-move] Suppose there exists no asymmetric polygon and no asymmetric pair, but there exists a polygon such that contains a single vertex. In this case, the leader selected from is the same no matter whether is in motion or is distance away from its position of symmetry.

[lem-vl-move] While is moving, no other robot will be selected as leader.

_Proof:_ is selected either from or from . If is selected from , then once starts moving, becomes asymmetric. No robot will execute again; therefore remains the leader. If is selected from , then once starts moving, it forms the new , where is the only robot in . Hence remains the leader.

_Correctness of MakeAsym( ):_ The algorithm always maintains more than three vertices in . This ensures that the SEC does not change during the motion of any robot in . Following Lemma [lem-all-pair-asym-ord], Lemma [lem-one-pair-asym-ord] and Observation [obs-v-move], if there is an asymmetric polygon, an asymmetric pair, or a polygon with a single vertex on the line of symmetry of , then is orderable and leader election is possible. Lemma [lem-vl-move] assures that while the leader is moving, no other robot will move.

In this section the algorithm for gathering transparent fat robots is presented. Initially all robots are assumed to be stationary and separated. Let be a set of such robots and the set of robots already in the gathering pattern; initially is empty. The algorithm first finds the center of the SEC of the robots in . The robots in are ordered based on their distances from . The robots equidistant from lie on the circumference of a circle and form a convex polygon, with the robots as the vertices of the polygon. Hence we get a number of concentric polygons having a common center.
The robots equidistant from are said to be in the same level. Let there be levels of distances: the robots nearest to are at level and the robots farthest from are at level . Let be the set of these concentric polygons. is preprocessed to generate using . The gathering algorithm then processes using to make asymmetric. If reports that is in , then gathering is not possible. If returns an orderable with asymmetric, then selects a leader robot using , and moves to extend the gathering pattern already formed, using Algorithm . moves towards in order to build the desired gathering pattern. If is not occupied by any other robot, then becomes the central robot of the gathering pattern. If is occupied by another robot, then the algorithm finds the last layer of the gathering pattern. If is full, then starts the new layer (Fig. [gatheringalgof]). If is not full, then may slide around (if required) and is placed in layer . When reaches its destination and stops, the robot is removed from and added to .

[Figure: of the SEC]

remove from ; add to ; calculate by the robots in set ; let be the center of ; build the set with the robots in set ; =

The following lemmas show that the choice of the destination and of the robot to move remain invariant during the motion of the designated robot.

[Figure: remains minimum during its movement]

While a robot is moving towards (the center of the SEC) and/or sliding around , its distance from remains minimum among all the robots in set .

_Proof:_ Let be a robot selected for moving towards , and let (Fig. [invariant]). No other robot is inside the circle of radius . According to , only starts moving towards ; the other robots are stationary. moves towards and, after touching the robots at the last layer , it may start sliding around , stopping once it reaches its final position.
During its motion, remains inside the circle of radius . Therefore remains at minimum distance from among all robots in set . After it reaches its final position, it is removed from set and added to set , and a new nearest robot in set is selected.

No obstacle appears in the path of a mobile robot to its destination.

_Proof:_ Let be a robot selected for moving towards . According to , is nearest to . Let in Fig. [obstacle] be the rectangle in which resides during its movement to ; is units. Let the rectangles and be of unit width and of length equal to that of rectangle . It suffices to show that if the center of no other robot appears inside the quadrilateral , then the path of towards is free of obstacles. If the center of any other robot appeared inside the region , then that robot would be nearest to ; however, has been selected for moving, so is nearest to and no robot center lies inside the region . If any robot appears inside the regions or , then either the robots touch or that robot is farther from than . As the robots in set are assumed to be separated, no other robot touches . According to , is the only robot considered for moving; therefore any robot inside the regions and/or does not obstruct the path of to its destination. Hence no obstacle appears in the path of a mobile robot to its destination.

In this paper we have proposed a deterministic distributed algorithm for gathering autonomous, oblivious, homogeneous, asynchronous, transparent fat robots. The robots have no explicit communication capability and no common coordinate system. We assume that initially all robots are stationary and separated.
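The obstacle-freedom lemma suggests a simple clearance test for fat robots of a known radius: the straight path of the mover is blocked iff some other robot's center lies within twice the robot radius of the path segment. The following is our own sketch of such a test, not the paper's algorithm; names and the unit radius are assumptions.

```python
import math

def path_is_clear(mover, others, dest, radius=1.0):
    """A fat robot of the given radius moving in a straight line from
    `mover` to `dest` collides with another robot iff that robot's
    center is strictly closer than 2*radius to the segment."""
    def dist_point_segment(p, a, b):
        # Distance from point p to the segment a-b.
        ax, ay = a; bx, by = b; px, py = p
        dx, dy = bx - ax, by - ay
        L2 = dx * dx + dy * dy
        t = 0.0 if L2 == 0 else max(0.0, min(1.0,
                ((px - ax) * dx + (py - ay) * dy) / L2))
        return math.hypot(px - (ax + t * dx), py - (ay + t * dy))
    return all(dist_point_segment(o, mover, dest) >= 2 * radius
               for o in others)
```

In the lemma's setting the mover is the robot nearest the center, so no other center can lie in the forbidden strip around its path, and the test always succeeds for the selected robot.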
given a set of such robots , we find a destination which remains invariant during the run time of the algorithm . only one robot is allowed to move at any point of time . the robot which moves is selected in such a way that it will be the only eligible robot until it reaches its destination . therefore , a leader election technique is required to elect a robot for movement from the set of homogeneous robots . we also propose a leader election algorithm in order to select a robot for movement . this paper characterizes all the cases where leader election is not possible . to do so , we define an ordering on a set of arbitrarily positioned robots . if an ordering is possible for a set of robots , then leader election is also possible and hence the robots can gather deterministically . the paper characterizes the configurations formed by the robots where ordering , and hence gathering , is not possible . these configurations are defined as symmetric configurations . the gathering algorithm ensures that multiple mobile robots will gather for all asymmetric configurations . ando h. , oasa y. , suzuki i. , yamashita m. : a distributed memoryless point convergence algorithm for mobile robots with limited visibility , ieee transactions on robotics and automation , 15(5 ) , 1999 , pp . 818 - 828 . dieudonne y. , petit f. , villain v. : brief announcement : leader election vs. pattern formation , podc 2010 , pp . 404 - 405 . efrima a. , peleg d. : distributed models and algorithms for mobile robot systems , software seminar , 2007 , lncs 4362 , pp . 70 - 87 . flocchini p. , prencipe g. , santoro n. , widmayer p. : gathering of autonomous mobile robots with limited visibility , annual symposium on theoretical aspects of computer science , 2001 , lncs 2010 , pp . 247 - 258 . gan chaudhuri s. , mukhopadhyaya k. : gathering asynchronous transparent fat robots , proc . international conference of distributed computing and internet technology , lncs 5966 , 2010 , pp . 170 - 175 . katayama y. , tomida y. , imazu h.
, inuzuka n. , wada k. : dynamic compass models and gathering algorithms for autonomous mobile robots , international colloquium on structural information and communication complexity , 2007 , lncs 4474 , pp . 274 - 288 . | _ gathering _ of autonomous mobile robots is a well known and challenging research problem for a system of multiple mobile robots . most of the existing works consider the robots to be dimensionless points . algorithms for gathering _ fat robots _ ( a robot represented as a unit disc ) have been reported for _ three _ and _ four _ robots . this paper proposes a distributed algorithm which deterministically gathers asynchronous fat robots . the robots are assumed to be transparent and they have full visibility . since the robots are unit discs , they can not overlap ; we therefore define a _ gathering pattern _ for them . the robots are initially considered to be stationary . a robot is visible throughout its motion . the robots do not store past actions . they are anonymous and can not be distinguished by their appearances , and they do not have a common coordinate system or chirality . the robots do not communicate through message passing . in the proposed _ gathering _ algorithm , one robot moves at a time towards its destination . the robot which moves is selected in such a way that it will be the only robot eligible to move until it reaches its destination . in case of a tie , this paper proposes a _ leader election _ algorithm which produces an ordering of the robots , and the first robot in the ordering becomes the leader . the ordering is unique in the sense that each robot , characterized by its location , agrees on the same ordering . we show that if a set of robots can be ordered then they can gather deterministically . the paper also characterizes the cases where ordering is not possible . this paper also presents the important fact that if leader election is possible , then _ gathering pattern _ formation is possible even with no chirality .
it is known that pattern formation and leader election are equivalent for robots when the robots agree on chirality . this paper proposes an algorithm for the formation of the _ gathering pattern _ for transparent fat robots without chirality . _ keywords : _ asynchronous , fat robots , gathering , leader election , chirality . |
fundamental processes , such as transcription and replication require a transient , local opening of the dna double helix .bubbles _ occur spontaneously under physiological conditions , while complete melting and the separation of the two complementary strands requires temperatures around .bubbles have been implicated as an explanation for high cyclization rates in short dna fragments , but their main interest lies in biology and in the physical mechanism underlying the functioning and control of transcription and replication start sites : the stability of dna is sequence - dependent and opening is strongly influenced by superhelical stress and the binding of regulatory proteins .in particular , benham and coworkers showed significant correlations between the positions of strongly stress - induced destabilized regions and regulatory sites .+ several models exist to describe the internal opening of dna .the peyrard - bishop - dauxois and the poland - scheraga ( ps ) models have already been used to quantify bubble statistics and for the _ ab initio _ annotation of genomes on the basis of correlations between biological function and thermal melting . here, we describe the bubble statistics of random and biological sequences at physiological conditions using ( i ) the zimm - bragg ( zb ) model , an efficient approximation of the ps model , and ( ii ) the benham model , a generalization of the zb model accounting for superhelical density but neglecting writhe . 
after exploring the bubble statistics of unconstrained dna using the zb model, we introduce an asymptotically exact self - consistent linearization of the benham model as a precise and convenient tool to study the huge impact of superhelicity on local bubble opening .the numerical efficiency of the method allows us to investigate the bubble statistics for entire genomes and random sequences of sufficient length ( bp ) to obtain statistically significant results for sequence effects on bubble statistics and positioning . in a final step , we correlate the positions of highly probable bubbles within the genome of _e. coli _ with the position of transcription start sites ( tss ) and start codons .the most widely - used model to treat the denaturation of dna chains is the ps model which offers predictive power for thermal melting and strand dissociation for dna of arbitrary length , strand concentration and a wide range of ionic conditions . for long heterogeneous sequences ,whose local denaturation is dominated by the quenched sequence disorder , the related and computationally faster zb model gives surprisingly good results . in the ps and zb formalisms ,the free energy of a given configuration is decomposed into formation free energies for closed base - pair steps and free energy penalties for the nucleation of unpaired regions ( or bubbles ) . using periodic boundary conditions , the zb hamiltonian for a circular chain of length can be expressed as where ( ) if base - pair is open ( closed ) . ( ) is the nearest - neighbor ( nn ) free energy to form the base - pair step . 
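The ZB Hamiltonian above is that of a 1d Ising model and can be solved with a 2x2 transfer matrix; for a homogeneous circular chain the result can be checked against the asymptotic closed form. The sketch below is illustrative only: the paper's own symbols were lost in extraction, so it uses the textbook ZB parametrization, with a stability weight s per closed base pair and a nucleation factor sigma per helix-coil boundary, on a circular chain of n base pairs.

```python
import numpy as np

def lnZ(s, sigma, n):
    """log partition function of a homogeneous circular chain (transfer matrix):
    weight s per closed bp, nucleation factor sigma at each helix-coil boundary."""
    T = np.array([[s, 1.0], [sigma * s, 1.0]])
    lam = np.linalg.eigvals(T).real      # both eigenvalues are real and positive here
    return np.log(np.sum(lam ** n))

def theta_closed(s, sigma, n, h=1e-6):
    """Average closed-bp fraction, theta = (s/n) * d(lnZ)/ds (finite difference)."""
    return s * (lnZ(s + h, sigma, n) - lnZ(s - h, sigma, n)) / (2.0 * h * n)

def theta_infinite(s, sigma):
    """Closed fraction of an infinitely long homogeneous chain (closed form)."""
    return 0.5 * (1.0 + (s - 1.0) / np.sqrt((s - 1.0) ** 2 + 4.0 * sigma * s))
```

For chains of a few hundred base pairs the finite-chain transfer-matrix value already agrees with the infinite-chain expression to better than a part in a thousand, since the subdominant eigenvalue is exponentially suppressed.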
being formulated as a 1d ising model , the model is easily solved analytically ( numerically ) for homogeneous ( heterogeneous ) sequences using transfer matrix methods . the partition function of the system is then given by ,\ ] ] the individual closing probability by .\ ] ] in particular , it is possible to calculate position - resolved opening probabilities , , for bubbles containing open bps and beginning at the closed bp , by \ ] ] with the global closing probability and the probability per bp to observe a bubble of length given by and . basic summing rules on the bubble probabilities imply that , and that , where is the average bubble length and is the probability per bp to observe a bubble of arbitrary length . for homogeneous sequences , these equations can be solved easily . in the asymptotic limit , this leads to ^n , \label{eq : homoz}\\ \langle\theta\rangle & = & \frac{1}{2}\left[1+\frac{g-1}{\sqrt{(g-1)^2 + 4s}}\right ] , \label{eq : homozb}\\ p_l & = & \frac{1}{2}\left[\frac{2}{1+g+\sqrt{(g-1)^2 + 4s}}\right]^{l+2 } \nonumber \\ & & \times\left[\frac{2s+g\left(g-1+\sqrt{(g-1)^2 + 4s}\right)}{\sqrt{(g-1)^2 + 4s}}\right ] \label{eq : homop } \end{aligned}\ ] ] with ] . the bubble probability is therefore a decreasing exponential function of for homogeneous sequences . for heterogeneous sequences , we numerically compute the different desired observables in a -algorithm . in a first step , we iteratively compute the forward and backward products , and . the second step consists of applying equations [ eq : zbz ] , [ eq : zbthet ] and [ eq : zbpb ] . in living organisms , dna is highly topologically constrained into circular domains ( closed loops or circular molecules ) . each closed domain is defined by a topological invariant , the so - called linking number .
represents the algebraic number of turns either strand of the dna makes around the other .it can be decomposed in two contributions : the twist which is the number of turns of the double - helix around its central axis , and the writhe which is the number of coils of the double - helix . in the majority of living systems ,the average linking number is below the characteristic linking number value of the corresponding unconstrained linear dna , due to a negative superhelical density imposed by protein machineries , where is the linking number difference . per bp as a function of the relative opening in the asymptotic limit , for different stress values : ( squares ) , -0.03 ( stars ) , -0.06 ( circles ) and -0.1 ( crosses).,width=321 ] in this article , we consider bubble openings in superhelically constrained circular dna using the benham model for an imposed superhelical density , where the standard thermodynamic description of base - pairing ( ) is coupled with torsional stress energetics . for each state ,if one neglects the writhe contribution , the imposed linking difference can be decomposed in three contributions : 1 ) the denaturation of base - pairs relaxes the helicity by where bp / turn is the number of base - pair in a helical turn ; 2 ) the resulting single - strand regions can twist around each other inducing a global over - twist of ; 3 ) then , the bending and twisting of double - stranded parts is put in the residual linking number . therefore , due to the topological invariance of , we get the closure relation for denatured regions , the high flexibility of single - stranded dna allows unpaired strands to interwind . the energy associated with the helical twist ( in rad / bp ) of open base - pair is where the torsional stiffness is known from experiments , .the individual twist are related to the global over - twist via the relation . 
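The three contributions to the linking difference described above can be written as a one-line bookkeeping identity. The sketch below is an assumption-laden illustration: the helical repeat A of roughly 10.4 bp/turn and the sign conventions were stripped from the text, so both are reconstructed from the verbal description (opening n_open base pairs relaxes n_open/A turns; interwinding of the open strands at twist tau rad/bp adds tau*n_open/(2*pi); the remainder is the residual linking difference carried by the paired regions).

```python
import math

A = 10.4  # helical repeat in bp/turn (assumed value; the number was lost in extraction)

def residual_linking(delta_Lk, n_open, tau):
    """Residual linking difference of the paired regions, from the closure relation
    delta_Lk = -n_open/A + tau*n_open/(2*pi) + delta_Lk_residual (assumed signs)."""
    return delta_Lk + n_open / A - tau * n_open / (2.0 * math.pi)
```

With these conventions, opening 104 bp at zero interwinding relaxes exactly 10 turns of undertwist, so an imposed linking deficit of -10 would be fully absorbed by denaturation.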
for paired helical regions , it has been experimentally found that superhelical deformations induce an elastic energy , quadratic in the residual linking difference , where is the number of open base - pairs and . by integrating over the continuous degrees of freedom , the superhelical stress energetics is represented by a non - linear effective hamiltonian : ^ 2 \nonumber \\ & & -\frac{1}{2\beta } \log\left(\left[\frac{2\pi } { \beta c}\right]^{n_o}\frac{4\pi^2 c}{4\pi^2 c+k n_o}\right ) \label{eq : qo } \end{aligned}\ ] ] is minimal for a non - zero number of open base - pairs which increases as the stress strength increases ( see figure [ fig : qeff ] ) . + the total effective hamiltonian is given by . we fix . the benham hamiltonian is defined in a superhelical density ( )-imposed ensemble defined by equation [ eq : she ] . in this section , we briefly discuss a similar model but in the torque - imposed ensemble . + in this ensemble , a linear dna segment ( length ) is constrained by a weak torque applied on base , the first bp being fixed . for each bp , we define its orientation in the plane perpendicular to the average axis of the double - helix and oriented in the 5 ' to 3 ' direction ( for a denatured bp - step , of the benham model ) . the total hamiltonian of the system is then given by with and the natural helical twist . writing , and integrating over the , leads to the effective hamiltonian ( relative to the situation without torque ) : \sum_i \theta_i\ ] ] applying the constant torque is therefore equivalent to adding an external field ] in discrete fourier modes , and by using the transfer - matrix method . benham and coworkers have also developed an approximate method which first involves a windowing procedure to find the minimum free energy and then considers only the states whose energies do not exceed the minimum one by more than a given threshold .
at high threshold values or high negative superhelicity ,the computation time for this algorithm scales exponentially with the threshold and the superhelical stress .both schemes are still time demanding for very long sequences . in order to speed up the resolution of the benham model and to access directly to position - dependent opening properties of bps _ and _ bubbles , we develop an efficient variational method allowing us to use the transfer - matrix solution of the zb model . for long sequences , assuming that fluctuations of are small around the value ,we can expand around .the approximated effective hamiltonian then takes the typical zb form : where represents the mean - field of our approximation . if one imposes the superhelical stress ( ie , the torque ) instead of the superhelical density , is related to the effective field generated by the torque ( see above ) .+ in the following , we employ the benham ensemble of an imposed superhelical density , , and determine ( or ) self - consistently .self - consistency requires , and is equivalent to solving because the function is monotonic . for homogeneous sequences ,the general solution of the self - consistency equation [ eq : fh ] can not be computed analytically . however , at low temperatures , weak superhelical densities and in the limit of infinitely long chains , a small perturbation development is valid and leads to analytical expressions for . for infinitely long chains , becomes with , and /(2\beta) ] , we also have in the limit , inserting in eq.[eq : dh ] and solving eq.[eq : fh ] , leads to this expression is valid until ( ) , i.e. )}\right)\equiv \sigma_l\ ] ] for , we could write ( with ). 
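Because the self-consistency function is monotonic, its root can be bracketed and located by a simple bisection (the paper couples bisection with Newton-Raphson; the sketch below shows only the bracketing part, with a cheap toy function standing in for the expensive transfer-matrix evaluation).

```python
import math

def solve_self_consistent(F, lo, hi, tol=1e-8, max_iter=200):
    """Root of a bracketed monotone self-consistency function F by bisection.

    F stands in for the expensive transfer-matrix evaluation: each call would
    correspond to one full sweep over the sequence.
    """
    f_lo = F(lo)
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        f_mid = F(mid)
        if abs(f_mid) < tol or (hi - lo) < tol:
            return mid
        if (f_mid > 0) == (f_lo > 0):
            lo, f_lo = mid, f_mid   # root lies in the upper half
        else:
            hi = mid                # root lies in the lower half
    return 0.5 * (lo + hi)

# toy self-consistency x = cos(x), i.e. F(x) = cos(x) - x, monotone on [0, 2]
root = solve_self_consistent(lambda x: math.cos(x) - x, 0.0, 2.0)
```

Bisection halves the bracket at every call, so reaching a relative precision of 1e-8 costs roughly 30 function evaluations, consistent with the evaluation counts quoted in the text.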
then , and .solution of eq.[eq : fh ] leads to figure [ fig : approx ] shows that the numerical solution of eq.[eq : fh ] agrees very well with the two expressions found above ( eq.[eq : h1 ] and [ eq : h2 ] ) .+ for a homogeneous sequence ( ).,width=321 ] to compute bp or bubbles properties , formulas [ eq : homoz ] , [ eq : homozb ] and [ eq : homop ] are still available if one replaces by and by . for heterogeneous sequences ,we use the bisection method coupled to the newton - raphson method to numerically solve the self - consistency equation , for fixed values of the temperature and of the superhelical density . knowing that an evaluation of the function requires one transfer matrix method computation ( each ), it takes typically evaluations to determine the root with a relative precision of .this allows numerically efficient computation of denaturation profiles .for example , computing the local closing probabilities for the _ e.coli _ genome ( mbps ) takes about seconds on a 2.4 ghz computer with the self - consistent method , whatever the density is . on the same computer, it would take about s with the exact method ( algorithm ) for any values ( interpolation of data given in ref. ) , and about s with the approximate method for ( and around 40 times more for ) . ] ) at different salt concentrations ( \in[0.05,1] ] ( circles ) , 0.1 ( squares ) and 0.3 m ( crosses ) , respectively to the corresponding unconstrained ( zb ) situation .dashed lines represent the fitted relation given in eq.[eq : tmsiggc].,width=321 ] on figure [ fig : salt ] a , we observe that , under the different considered gc - contents and salt concentrations , the melting temperature of random superhelical dna is a quadratic function of , and only the intercept of this function ( ) depends on the gc and on ] ) for the neighborhood of a transcription start site ( bp n 259382 ) in the _ e. 
coli _ genome , in the presence ( b , c , d ) or in the absence ( a ) of an imposed superhelical density ( b ) , -0.06 ( c ) and -0.10 ( d).,width=321 ] positions of strongly stress - induced destabilized regions have been shown to correlate with many regulatory regions including transcription start sites and origins of replication . in this section , we focus on bubble positioning inside the promoter regions of the bacterium _ e. coli _ . + the transcription of dna is initiated by the local opening of the double - helix at transcription start sites . figure [ fig : compa ] illustrates their association with strongly stress - induced destabilized regions . in addition to position - dependent opening probabilities , our approach allows us to calculate the complete bubble free - energy landscape , . figure [ fig : land ] shows the effect of superhelicity on for the neighborhood of the same tss in _ e. coli _ . the analysis reveals that opening is the result of the meta - stable unwinding of a large bubble and not of enhanced small - scale breathing . we note that knowledge of is essential for modeling the dynamics of bubble nucleation and growth . ) bubble centers included in the regions [ tss-300,tss+300 ] for the 760 studied tss ( black bars ) or for randomly picked regions ( blue line ) . ( c ) probability distribution function of the distance between highly probable bubble centers and the nearest start codons for the 1158 actually found ( black line ) or randomly situated ( blue line ) bubbles.,width=321 ] figure [ fig : tss ] analyses the statistical relation between tsss and bubbles induced by superhelical stress for the entire genome of _ e. coli _ .
figure [ fig : tss]a shows a significant and non - random destabilization around tsss , with a maximal opening around . the same computation using the zb model shows insignificant and orders - of - magnitude smaller opening probabilities . however , we find a non - random signal around tss-10 corresponding to the position of the at - rich pribnow box , an essential motif to start transcription in bacteria . figure [ fig : tss]b gives the relative positions of highly probable bubbles included in tss neighborhoods . the centers of these bubbles are mainly localized in the [ tss-200,tss ] region where many transcriptional and promoter factors are recruited and bind to dna . conversely , figure [ fig : tss]c confirms that the majority of highly probable bubbles are situated upstream of and close to the start codons of genes . indeed , these bubbles are composed of around of coding bps , significantly lower than the percentage of coding bps in _ ( ) . we have developed a numerically efficient , self - consistent solution of the benham model of bubble opening in superhelically constrained dna . in particular , we are able to go beyond the calculation of opening probabilities for base pairs and to address the full , position - dependent bubble statistics for entire genomes . our results indicate that negative supercoiling leads to ( meta- ) stable unwinding of bubbles comprising base - pairs . in heterogeneous sequences , bubbles are strongly localized in at - rich domains , with sequence disorder dominating the bubble statistics . as we have shown , large bubbles open with a significantly larger probability in biological sequences than in random sequences with identical gc - content . in the case of _ e. coli _ , the most likely bubbles are located directly upstream from transcription start sites , highlighting the biological importance of this now well - understood physical property of dna . c. calladine , h. drew , b. luisi , and a.
travers , _ understanding dna ; the molecule and how it works _ ( elsevier academic press , 2004 ) .b. alberts , d. bray , j. lewis , m. raff , k. roberts , and j.d .watson , _ molecular biology of the cell _ ( garland science , 2002 ) .d. kowalski , and m. eddy , proc .usa * 85 * , 9464 ( 1988 ) .d. kowalski , d. natale , and m. eddy , embo j. * 8 * , 4335 ( 1989 ) . m. frank - kamenetskii , biopolymers * 10 * , 2623 ( 1971 )cloutier , and j. widom , mol .cell * 14 * , 355 ( 2004 ) .j. yan , and j.f .marko , phys .* 93 * , 108108 ( 2004 ) .j. santalucia , proc .usa * 95 * , 1460 ( 1998 ) .benham , j. mol .biol . * 225 * , 835 ( 1992 ) .fye , and c.j .benham , phys .e * 59 * , 3408 ( 1999 ) .benham , and r.r.p .singh , phys .lett . * 97 * , 059801 ( 2006 ) .sobell , proc .usa * 82 * , 5328 ( 1985 ) .t. ambjrnsson , and r. metzler , phys. rev .e * 72 * , 030901 ( 2005 ) .h. wang , m. noordewier , and c.j .benham , genome res .* 14 * , 1575 ( 2004 ) .p. ak , and c.j .benham , plos comp .* 1 * , e7 ( 2005 ). h. wang , and c.j .benham , plos comp .biol . * 4 * , e17 ( 2008 ) .m. peyrard , and a.r .bishop , phys .* 62 * , 2755 ( 1989 ) .t. dauxois , m. peyrard , and a.r .bishop , phys .e * 47 * , r44 ( 1993 ) .d. poland , and h.a .scheraga , j. chem . phys . * 47 * , 1456 ( 1966 ) .d. jost , and r. everaers , biophys .j. * 96 * , 1056 ( 2009 ) .g. kalosakas , k.o .rasmussen , a.r .bishop , c.h .choi , and a. usheva , europhys .lett . * 68 * , 127 ( 2004 ) .van erp , s. cuesta - lopez , j .-hagmann , and m. peyrard , phys .. lett . * 95 * , 218104 ( 2005 ) .s. ares , and g. kalosakas , nano lett .* 7 * , 307 ( 2007 ) .y. kafri , d. mukamel , and l. peliti , eur .b * 27 * , 135 ( 2002 ) .r. blossey , and e. carlon , phys .e * 68 * , 061911 ( 2003 ) .t. garel and c. monthus , j. stat .mech . , p06004 ( 2005 ) .b. coluzzi , and e. yeramian , eur . j. phys .b * 56 * , 349 ( 2007 ) . c. monthus , and t. garel , arxiv : cond - mat/0605448v1 ( 2007 ) .e. yeramian , s. 
bonnefoy , and g. langsley , bioinformatics * 18 * , 1 ( 2002 ) .e. carlon , a. dkhissi , m.l .malki , and r. blossey , phys .e * 76 * , 051916 ( 2007 ) .d. jost , and r. everaers , j. phys . : condens .matter * 21 * , 034108 ( 2009 ) .vologodskii , a.v .lukashin , v.v . , anshelevich , and m. frank - kamenetskii , nucleic acids res . * 6 * , 967 ( 1979 ) .t. garel , h. orland , and e. yeramian , arxiv : q - bio.bm/0407036 ( 2004 ) .benham , and c. bi , j. comput .11 * , 519 ( 2004 ) . c. bi , and c.j .benham , _ proceedings of the 2003 ieee bioinformatics conference _ , ( ieee computer society , 2003 ) . w.h .press , s.a .teukolsky , w.t .vetterling , and b.p .flannery , _ numerical recipes in fortran : the art of scientific programming _( cambridge university press , 1992 ) . c. bi and c.j .benham , bioinformatics * 20 * , 1477 ( 2004 ) .benham s web - server : + http://benham.genomecenter.ucdavis.edu .jeon , j. adamcik , g. dietler , and r. metzler , phys .* 105 * , 208101 ( 2010 ) .benham , phys .e * 53 * , 2984 ( 1996 ) .t. hwa , e. marinari , k. sneppen , and l .- h .tand , proc .usa * 100 * , 4411 ( 2003 ) .t. ambjrnsson , s.k .banik , o. krichevsky , and r. metzler , biophys .j. * 92 * , 2674 ( 2007 ) .s. gama - castro _ et al _ , nucleic acids res . *36 * , d120 ( 2008 ) .pribnow , d. , proc .usa * 72 * , 784 ( 1975 ) . | we present a general framework to study the thermodynamic denaturation of double - stranded dna under superhelical stress . we report calculations of position- and size - dependent opening probabilities for bubbles along the sequence . our results are obtained from transfer - matrix solutions of the zimm - bragg model for unconstrained dna and of a self - consistent linearization of the benham model for superhelical dna . the numerical efficiency of our method allows for the analysis of entire genomes and of random sequences of corresponding length ( base pairs ) . 
we show that , at physiological conditions , opening in superhelical dna is strongly cooperative with average bubble sizes of base pairs ( bp ) , and orders of magnitude higher than in unconstrained dna . in heterogeneous sequences , the average degree of base - pair opening is self - averaging , while bubble localization and statistics are dominated by sequence disorder . compared to random sequences with identical gc - content , genomic dna has a significantly increased probability to open large bubbles under superhelical stress . these bubbles are frequently located directly upstream of transcription start sites . |
the minority game ( mg ) originated from the el farol bar problem in game theory , first conceived by arthur in 1994 , where a finite population of people try to decide , at the same time , whether to go to the bar on a particular night . since the capacity of the bar is limited , it can only accommodate a small fraction of all who are interested . if many people choose to go to the bar , it will be crowded , depriving the people of the fun and thereby defeating the purpose of going to the bar . in this case , those who choose to stay home are the winners . however , if many people decide to stay at home , then the bar will be empty , so those who choose to go to the bar will have fun and they are the winners . apparently , no matter what method each person uses to make a decision , the option taken by the majority of people is guaranteed to fail , and the winners are those who choose the minority one . indeed , it can be proved that for the el farol bar problem there are mixed strategies and a nash - equilibrium solution does exist , in which the option taken by the minority wins . a variant of the problem was subsequently proposed by challet and zhang , named the mg problem , where a player among an odd number of players chooses one of two options at each time step . subsequently , the model was studied in a series of works . in physics , mg has received a great deal of attention from the statistical - mechanics community , especially in terms of problems associated with non - equilibrium phase transitions . in the current literature , the setting of mg is that there is a _ single _ resource with players having two possible options ( e.g.
, in the el farol bar problem there is a single bar and the options of agents are either going to the bar or not ) , and an agent is assumed to react to available global information about the history of the system by taking on an alternative option that is different from the current one it is taking . an outstanding question concerns the nonlinear dynamics of mg with _ multiple resources _ . the purpose of this paper is to present a class of multi - resource mg models . in particular , we assume a complex system with multiple resources and , at any time , an individual agent has _ resources / strategies _ to choose from . we introduce a parameter , which is the probability that each agent responds based on its available local information by selecting a less crowded resource in an attempt to gain a higher payoff . we call the _ minority - preference probability _ . we find , strikingly , that as is increased , the phenomenon of grouping emerges , where the resources can be distinctly divided into two groups according to the number of their attendees . in addition , the number of stable pairs of groups also increases . we shall show that the grouping phenomenon plays a fundamental role in shaping the fluctuations of the system . the phenomenon will be demonstrated numerically and explained by a comprehensive analytic theory . an application to the analysis of empirical data from a real financial market will also be illustrated , where grouping of stocks ( resources ) appears . our model is not only directly relevant to nonlinear and complex dynamical systems , but also applicable to social and economic systems . our multi - resource mg model is presented in sec . [ sec : model ] . the emergence of the grouping phenomenon is demonstrated in sec . [ sec : numerics ] . a general theory is developed in sec . [ sec : theory ] to elucidate the dynamics of the emergence and evolution of the strategy groups . concluding remarks are presented in sec . [ sec : conclusion ] . we consider a complex ,
evolutionary - game type of dynamical system of interacting agents competing for multiple resources . each agent will choose one resource in each round of the game , and each resource has a limited capacity , i.e. , the number of agents it can accommodate has an upper bound . there are thus multiple strategies ( , , , , where is the maximum number of resources / strategies ) available to each agent . on average , each strategy can accommodate agents , and we consider the simple case of . let be the number of agents selecting a particular strategy . if , the corresponding agents win the game and , consequently , is the _ minority strategy _ . however , if , the associated resource is too crowded so that the strategy fails and the agents taking it lose the game , which defines the `` majority strategy . '' the optimal solution to the game dynamics is thus . in a real - world system , it is often difficult or even impossible for each agent to gain global information about the dynamical state of the whole system . it is therefore useful to introduce the concept of a _ local information network _ in our multiple - resource mg model . at each time step , with probability , namely the _ minority - preference probability _ , each agent acts based on local information that it gains by selecting one of the available strategies . in contrast , with probability , an agent acts without the guidance of any local information . for the minority - preference case , agent has neighbors in the networked system . the required information for to react consists of all its neighbors ' strategies and , among them , the winners of the game , i.e.
, those neighboring agents choosing the minority strategies at the last time step . let be the set of minority strategies of the winning neighbors of , where a strategy may appear a number of times if it has been chosen by different winning neighbors . with probability , agent will choose one strategy randomly from . thus , the probability for strategy to be selected is proportional to the number of times it appears in , i.e. , , where is the number of elements in and the number of times strategy appears in . if is empty , will randomly select one from the available strategies . in contrast , when an agent selects a strategy without the guidance of any local information ( with probability ) , it will either choose a different strategy randomly from the available ones with mutation probability , or inherit its strategy from the last time step with probability . as a concrete example to illustrate the strategy - grouping phenomenon , we set . figures [ fig : ts](a - c ) show time series of , the number of agents selecting each strategy , for , 0.45 , and 1.0 , respectively . for fig . [ fig : ts](a ) where , an agent makes no informed decision in that it changes strategy randomly with probability but stays with the original strategy with probability . in this case , the s appear random . for the opposite extreme case of [ fig . [ fig : ts](c ) ] , each agent makes well - informed decisions based on available local information about the strategies used by its neighbors . in this case , the time series are quasiperiodic ( a detailed analysis will be provided in sec . [ sec : theory ] ) . for the intermediate case of [ fig . [ fig : ts](b ) ] , agents ' decisions are partially informed . in this case , an examination of the time series points to the occurrence of an interesting grouping behavior : the 5 strategies , in terms of their selection by the agents , are divided into two distinct groups and that contain and resources , respectively .
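The update rule just described can be sketched in a few lines. Everything parameter-specific here is an assumption on my part (all-to-all neighbourhood, so every winner is visible to every agent; capacity N/K per resource; mutation probability m for the uninformed move), since the paper's actual values were lost in extraction.

```python
import random

def simulate_mg(N=50, K=5, p=0.45, m=0.5, steps=200, seed=0):
    """Multi-resource minority-game sketch.

    Assumptions (not from the paper): all-to-all neighbourhood, capacity N/K
    per resource, mutation probability m for the uninformed move.
    """
    rng = random.Random(seed)
    choice = [rng.randrange(K) for _ in range(N)]
    capacity = N / K
    history = []
    for _ in range(steps):
        counts = [0] * K
        for c in choice:
            counts[c] += 1
        history.append(counts)
        # multiset of minority strategies held by this round's winners
        winners = [c for c in choice if counts[c] < capacity]
        new_choice = []
        for c in choice:
            if rng.random() < p:
                # informed move: pick a strategy in proportion to how often
                # it appears among the winners (random strategy if none won)
                new_choice.append(rng.choice(winners) if winners else rng.randrange(K))
            elif rng.random() < m:
                # uninformed move: mutate to a different strategy
                new_choice.append(rng.choice([k for k in range(K) if k != c]))
            else:
                new_choice.append(c)  # keep the current strategy
        choice = new_choice
    return history
```

Drawing uniformly from the multiset `winners` makes the selection probability of a strategy proportional to the number of winning neighbors holding it, as in the rule above.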
the time series associated with the smaller group exhibit larger fluctuations about their equilibrium . to better characterize the fluctuating behaviors in the time series , we calculate the variance ^ 2\rangle$ ] as a function of the system parameter , where denotes the expectation value averaged over a long time interval , as shown in fig . [ fig : variance ] on a logarithmic scale . we observe a generally increasing behavior in with and , strikingly , a bifurcation - like phenomenon . in particular , for , where is the bifurcation point , for all strategies assumes approximately the same value . however , for , there are two distinct values of , signifying the aforementioned grouping behavior [ fig . [ fig : ts](b ) ] . from fig . [ fig : variance ] , we also see that , after the bifurcation , the two branches of are linear ( on a logarithmic scale ) and have approximately the same slope , suggesting the following power - law relation : , for , where and are the intercepts of the two lines in fig . [ fig : variance ] . we thus obtain in sec .
[fig : ts ] and the sizes of the groups ( denoted by and , respectively ) . specifically , our theory predicts the following ratio between the variances of the two bifurcated branches : which is identical to the numerically observed ratio in eq . ( [ eq : main_ratio ] ) , with the additional prediction that the strategies in the group of smaller size exhibit stronger fluctuations , since the corresponding value of is larger . overall , the emergence of the grouping behavior in multiple - resource mgs , as exemplified in fig . [ fig : variance ] , resembles a period - doubling like bifurcation . while period - doubling bifurcations are extremely common in nonlinear dynamical systems , to our knowledge , in complex game systems a clear signature of such a bifurcation had not been reported previously . a careful examination of the time series for various has revealed that the strategy - grouping process has already taken place prior to the bifurcation point in the variance , but all of the resulting grouping states are unstable . take as an example the -strategy system in fig . [ fig : ts ] . in principle , there can be two types of pairing groups : and . for any grouping state , the following constraint applies : there are in total ( if is even ) or ( if is odd ) possible grouping states for the system with available strategies . however , the grouping states are not stable for . what happens is that a strategy can remain in one group only for a finite amount of time before switching to a different group . assume that the sizes of the original two pairing groups are and , respectively . the sizes of the new pair of groups are thus and , as stipulated by eq . ( [ eq : constraint ] ) . associated with switching to a different pair of groups , the amplitudes of the time series for each strategy also change . as the bifurcation parameter is increased , the stabilities of different pairs of grouping states also change .
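the counting of possible pairing groups stated above can be enumerated directly ; for the 5 - strategy example this yields the two pairings mentioned in the text , and for 16 strategies the 8 states referenced later . a minimal sketch ( names assumed ) :

```python
def grouping_states(K):
    """Enumerate the possible pairing groups (K_g1, K_g2) with
    K_g1 + K_g2 = K and K_g1 <= K_g2.  There are K//2 such states:
    K/2 when K is even, (K-1)/2 when K is odd, matching the text."""
    return [(k1, K - k1) for k1 in range(1, K // 2 + 1)]
```

for K = 5 this returns [(1, 4), (2, 3)] , and for K = 16 it returns 8 states , from (1, 15) to (8, 8) .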
at the bifurcation point , one particular pair of groups becomes stable , such as the grouping state in fig . [ fig : variance ] . the bifurcation - like phenomenon and the emergence of various strategy - grouping states are general for multiple - resource mg game dynamics . for example , fig . [ fig : k16 ] shows as a function of for a system with available strategies . there are in total 8 possible grouping states , ranging from to . as is increased , the grouping states , , , and become stable one after another , as can be seen from the appearance of their corresponding branches in fig . [ fig : k16 ] . this behavior can be understood theoretically through a stability analysis ( sec . [ sec : theory ] ) . another phenomenon revealed by fig . [ fig : k16 ] is the merging of bifurcated branches . for example , as is increased through about 0.8 , the grouping states disappear one after another in the reverse order from that in which they initially appeared . this can also be understood through the stability analysis ( sec . [ sec : theory ] ) . here we develop an analytic theory to understand the emergence , characteristics , and evolution of the strategy groups . in general , for a multiple - resource mg system of agents , as the parameter is increased so that agents become more likely to make informed decisions for strategy selection , the available strategies can be divided into pairs of groups . the example in fig . [ fig : ts](b ) presents a case where there are two distinct strategy groups and , which contain and strategies , respectively , where . for fig . [ fig : ts](b ) , we have . the strategies belonging to the same group are selected by approximately the same number of agents , i.e. , the time series for strategies in the same group are nearly identical . during the time evolution , a strategy can switch iteratively from being a minority strategy [ to being a majority one [ . in particular , as shown in the schematic map in fig .
[fig : sketchmap ] , for the strategy in group denoted by , and the strategy in group denoted by , if , we will have . in addition , the time series reveal that the average numbers of agents for strategies and , denoted by and ( the blue dashed line and red dotted line in fig . [ fig : sketchmap ] ) , respectively , are not equal to ( the black solid line in fig . [ fig : sketchmap ] ) . in fact , we have with , and . here , is the absolute value of the difference between and . the number of agents [ or typically fluctuates about the equilibrium [ or with amplitude [ or , as shown in the schematic map in fig . [ fig : sketchmap ] . based on the numerical observations , we can argue that the strategy - grouping phenomenon is intimately related to the fluctuations in the time series . assuming that the mg system is closed so that the number of agents is a constant , we have . thus , for two consecutive time steps , we have . substituting eq . ( [ eq : i1 ] ) into eq . ( [ eq : i2b ] ) , we have , from which we obtain the relations between and , and as . we see that the fluctuations of the time series are closely related to the grouping of the strategies . from the definition of , we can write the variances of and as . using eq . ( [ eq : i4 ] ) , we obtain . as shown in fig . [ fig : variance ] , the ratio of the variances of groups and from the simulation agrees very well with eq . ( [ eq : i5 ] ) . we develop a mean - field theory to understand the fluctuation patterns of the system . to be concrete , we still treat the case of two distinct groups . consider strategy that belongs to group and assume that is the majority strategy at time , i.e.
, .according to the mean - field approximation , at the next time step , the number of agents choosing strategy is \nonumber \\ & + & ( n - n_{s_i}^{(0)})(1-p)m\frac{1}{k},\end{aligned}\ ] ] where here , and together are the number of agents abandoning strategy ( or the flow out of strategy ) .that is , of the agents , agents will act based on local information by selecting a minority strategy different than for the next time step . at the same time , there will be agents acting without local information by choosing randomly one of the other strategies .the quantity represents the flow into from the remaining agents .these agents will mutate randomly to switch their strategies to without any local information .we thus have suppose .then and the other strategies in are the minority strategy , and the agents selecting those strategies win the game at .the time series at time can be written as \nonumber \\ & + & ( n - n_{s_i}^{(1)})[p\frac{1}{k_{g_1}}+(1-p)m\frac{1}{k } ] \nonumber\\ & = & n_{s_i}^{(1)}-(w_4+w_5)+(w_6+w_7 ) , \label{eq : nsi2}\end{aligned}\ ] ] where and stand for the flows out of strategy , while and represent the flows into from those agents on other strategies .we have where is larger than , and all other strategies in group will be the majority strategy again , as at time .the process thus occurs iteratively . from eqs .( [ eq : nsi+ ] ) and ( [ eq : nsi++ ] ) , we can get the number of agents for one given strategy at any time .in particular , denoting , , and , we obtain the iterative dynamics for agents selecting strategy as where and stands for the number of agents at . carrying out the iterative process in eq .( [ eq : nsib ] ) , we obtain for the case where exhibits stable oscillations , i.e. 
, the system is in a _ stationary state _ , we have , we thus obtain the values of and as a function of the probability , mutation probability , and the grouping parameter , and : from , we can get the expression of the amplitude of the fluctuation , the mean value , and its difference from as , /2=\frac{\gamma_{g_1}}{2(1+\alpha ) } , \\\nonumber & & \langle{n_{s_i}}\rangle=[n_{s_i}^{(2a)}+n_{s_i}^{(2a+1)}]/2 = \frac{2\beta+\gamma_{g_1}}{2(1-\alpha ) } , \\\nonumber & & \delta n_{s_i}=|\langle{n_{s_i}}\rangle - n / k|=|\frac{k\gamma_{g_1}-2np}{2k(1-\alpha)}| = \frac{np|k / k_{g_1}-2|}{2k(1-\alpha)}.\end{aligned}\ ] ] in the above derivation , we have assumed that belongs to group , and obtained expressions of the , , , and .similarly , using eq .( [ eq : conv ] ) , we can calculate the time series , the number of agents selecting the strategy belonging to group , and the corresponding characterizing quantities . alternatively , following the steps similar to those from eqs .( [ eq : nsi ] ) to ( [ eq : nsib ] ) , we can write the recurrence formula for as where . for the case of stable strategy, we get the corresponding number of agents as we find that the values of obtained from eq .( [ eq : nsj2a+1 ] ) agree with those from eq .( [ eq : conv ] ) very well ( derivation and data not shown ) .in addition , the expressions of , , and are identical to those associated with , with the quantity replaced by .the mean - field theory is ideally suited for fully connected networks .indeed , results from the theory and direct simulations agree with each other very well , as shown in fig . [fig : mf ] .however , in real - world situations , a fully connected topology can not be expected , and the mean - field treatment will no longer be accurate .for example , we have carried out simulations on square - lattice systems and found noticeable deviations from the mean - field prediction . 
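before moving to sparse networks , the closed - system variance relation derived earlier [ eq . ( [ eq : i5 ] ) ] can be checked numerically with synthetic anti - phase attendance series that obey the closure constraint ; the equal group equilibria and all names below are simplifying assumptions of this sketch , not the paper's full dynamics :

```python
import numpy as np

def grouped_series(N, K1, K2, A1, T=1000):
    """Synthetic grouped attendance series obeying the closure constraint
    sum_s n_s(t) = N at every step: K1 strategies oscillate with amplitude
    A1 about a common equilibrium while the other K2 strategies oscillate
    in anti-phase with amplitude A2 = K1*A1/K2, so the variance ratio of
    the two branches is (A1/A2)^2 = (K2/K1)^2."""
    A2 = K1 * A1 / K2
    sign = np.where(np.arange(T) % 2 == 0, 1.0, -1.0)   # period-2 oscillation
    mean = N / (K1 + K2)
    g1 = np.tile(mean + A1 * sign, (K1, 1)).T           # group G1 columns
    g2 = np.tile(mean - A2 * sign, (K2, 1)).T           # group G2 columns
    return np.hstack([g1, g2])
```

note that the smaller group carries the larger amplitude , consistent with the prediction that strategies in the group of smaller size fluctuate more strongly .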
to remedy this deficiency , we develop a modified mean - field analysis for mg dynamics on sparsely homogeneous networks ( e.g. , square lattices or random networks ) . due to the limited number of links in a typical large - scale network , it is possible for a failed agent to be surrounded by agents from the same group ( who will likewise fail the game ) . in this case , the failed agent has no minority strategy to imitate ( the set is empty ) and thus will randomly select one strategy from the available strategies . taking this effect into account , we can modify the mean - field approximation in eqs . ( [ eq : nsi ] ) and ( [ eq : nsi2 ] ) as with the two modified terms given by where is the probability for one agent in group to be surrounded by agents from the same group . the quantity stands for the flow from the failed agents in group [ the number is ] , who react to the information [ the number is ] but have no surrounding winner to supply an optional minority strategy [ the number is ] , and thus select with probability . the quantity represents two factors : ( 1 ) the failed agents in group who should have flowed into [ i.e. ] but are held back because they are surrounded by agents in , and ( 2 ) the failed agents in group who are surrounded by agents in and thus select with probability , the number of which is . clearly , we have . from eq . ( [ eq : mf22 ] ) , we obtain where the parameters are . the equation set ( [ eq : mfn ] ) represents the modified mean - field description of the time series associated with the stable strategies in the game system supported on sparsely homogeneous networks . the density of agents in is denoted by . for the case where agents from different groups are well mixed in the network , the probability for one given agent in to meet with agents in is ( for ) . if the average degree of the network is , the probability that one agent from is surrounded by agents from the same group is .
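the enclosure effect just introduced can be quantified under the well - mixed assumption : if an agent's neighbors are drawn independently from a population in which its own group has a given density , the chance that all neighbors fall in that group is the density raised to the power of the degree . a monte carlo check of this elementary fact ( names assumed ) :

```python
import random

def enclosure_probability(rho, k, trials=100_000, seed=1):
    """Monte Carlo estimate of the probability that all k neighbors of an
    agent belong to its own group, when group members are well mixed with
    density rho and neighbors are drawn independently."""
    rng = random.Random(seed)
    hits = sum(all(rng.random() < rho for _ in range(k)) for _ in range(trials))
    return hits / trials
```

for a square lattice ( four neighbors ) and a group of density 0.5 , the estimate is close to 0.5 ** 4 = 0.0625 .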
from simulations on the square - lattice system , where each agent has neighbors , we observe a quite weak effect of clustering of agents in or , so . based on the quantity , an analytic prediction of the time series in the modified mean - field ( mmf ) theory can be obtained , as shown in fig . [ fig : mf ] for a square - lattice system . we observe good agreement with the simulation results . simulations on homogeneous networks of different values have also been carried out , with results in good agreement with the prediction from the modified mean - field theory . our mean - field treatment yields formulas characterizing the stable oscillations associated with the grouping state , which include the variance ratio of the pairing groups [ eq . ( [ eq : i5 ] ) ] and the time series [ eqs . ( [ eq : nsi2a+1 ] ) and ( [ eq : mfn ] ) ] . however , from the simulation results shown in fig . [ fig : k16 ] , we see that not all the grouping states are stable in the parameter space . as is increased , the strategy - grouping state of the smaller becomes stable , and the corresponding branch appears . it is therefore useful to analyze the stability of the grouping state . in our treatment we have assumed and . then , the necessary condition for the grouping state to become stable is and . using eqs . ( [ eq : nsi2a+1 ] ) and ( [ eq : nsj2a+1 ] ) , we get where and are continuous functions of the parameters and , and . the two inequalities in eq . ( [ eq : stability_1 ] ) are nevertheless equivalent to each other . figure [ fig : pm ] presents a phase diagram in the parameter space , where the curves of for , , , are shown . the necessary condition for the strategy - grouping state with to be stable is that the parameters and are in the upper - right region of the curve . for a certain value of , only when will the state of be stable .
while the value of from simulation differs from the theoretical value , our mean - field theory does provide a qualitative explanation for the phenomenon in fig . [ fig : k16](a ) , where more branches of smaller strategy - grouping states become stable as is increased . we have also seen in fig . [ fig : variance ] and fig . [ fig : k16](a ) that , as approaches , the bifurcated branches of different grouping states merge together . for the case of even , fluctuates stably in the grouping state with . for the case of odd , however , fluctuates quasiperiodically [ see fig . [ fig : ts](c ) ] . in fact , the grouping state always switches between and , with . we can understand the instability and merging of grouping states from fig . [ fig : ts](c ) and the schematic map in fig . [ fig : sketchmap ] , as follows . as is increased to , and increase and become comparable to the amplitudes and , respectively . namely , the attendances of the strategies can be very close to . in the case that of one strategy does not cross because of _ noise _ , i.e. , it acts as the minority ( or majority ) strategy twice , the fluctuation of , as well as the grouping state , is changed . the financial market is a representative multi - resource complex system , in which many stocks are available for investment . we analyze the fluctuations of the stock prices from the empirical data of 27 stocks in the shanghai stock market 's steel plate between 2007 and 2010 . we regard the stocks , which are all from the iron and steel industry , as constituting a mg system with resources , where the agents selecting the resources correspond to the capital invested . this system is open in the sense that capital typically flows in and out , which is the main difference from our closed - system model .
in particular , given the time series of the daily closing price of stock , the daily log - return is defined as . the average return of the stocks at time , denoted by , signifies a global trend of the system at , which is caused by the change in the total amount of capital in this open , 27 - stock system . however , when we analyze the detrended log - returns , the system resembles a closed system , as does our model mg system . we shall demonstrate that the strategy - grouping phenomenon occurs in this real - world system . we calculate the pearson correlation coefficient of each pair of the detrended log - returns and , which leads to a correlation matrix , as shown in fig . [ fig : matrix](a ) . in terms of the eigenvector associated with the maximum eigenvalue of the matrix , we rank the order of the stocks and obtain the matrix , as shown in fig . [ fig : matrix](b ) . _ the striking behavior is that the matrix is apparently divided into 4 blocks , a manifestation of the grouping phenomenon . _ in particular , the matrix elements among the first 15 stocks and those among the remaining 12 stocks are generally positive , but the cross elements between stocks in the two groups are negative . it is thus quite natural to classify the first 15 stocks as belonging to group and the remaining 12 to group . we can then write the matrix in a block form as where the elements of and are positive , and those of and are negative . the phenomenon is that the -stock system has self - organized itself into a grouping state , which is a natural consequence of the mg dynamics in multi - resource complex systems . for one given stock , the mean absolute correlation is . this parameter reflects the weight of the stock in the system . if , the oscillations of stock are contained in the noise floor . in this case , there is no indication as to whether this stock belongs to group or . the larger the value of , the less ambiguous it is that the stock belongs to either one of the two groups .
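the detrending , correlation , and eigenvector - ranking procedure described above can be sketched with numpy ; the function below assumes a ( days x stocks ) price array and is an illustration of the steps , not the authors' exact pipeline :

```python
import numpy as np

def group_by_leading_eigenvector(prices):
    """Detect the two-group structure described in the text.

    prices: (T, K) array of daily closing prices for K stocks.
    Returns the correlation matrix of detrended log-returns, reordered by
    the leading eigenvector, plus the inferred group labels (0 or 1)."""
    r = np.diff(np.log(prices), axis=0)      # daily log-returns
    r = r - r.mean(axis=1, keepdims=True)    # subtract the cross-stock trend
    c = np.corrcoef(r, rowvar=False)         # Pearson correlation matrix
    w, v = np.linalg.eigh(c)                 # eigh: eigenvalues ascending
    lead = v[:, -1]                          # eigenvector of max eigenvalue
    order = np.argsort(lead)                 # rank stocks by its components
    labels = (lead > 0).astype(int)          # sign splits the two groups
    return c[np.ix_(order, order)], labels
```

on synthetic data with two anti - correlated groups , the reordered matrix shows the positive diagonal blocks and negative off - diagonal blocks described in the text .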
from the values of , ranked in the same order as in , we can see that the boundary between the two groups is the stock with the minimum . thus can be considered as the characteristic number to distinguish the different groups . we can also reorder the matrix according to within groups and , respectively . this leads to the matrix , as shown in fig . [ fig : matrix](c ) , further demonstrating the grouping phenomenon . the minority game , since its invention about two decades ago , has become a paradigm to study social and economical phenomena in which a large number of agents attempt to make simultaneous decisions by choosing one of the available options . in the most commonly studied case of a single available resource with two possible options for the players , agents taking the minority option are the guaranteed winners . various minority game dynamics have also received attention from the physics community due to their high relevance to a number of phenomena in statistical physics . it has become more and more common in the modern world that multiple resources are available in various social and economical systems . if the rule still holds that the winning options are minority ones , the questions that naturally arise are what type of collective behaviors can emerge and how they would evolve in the underlying complex system . our present work aims to address these questions computationally and analytically . the main contributions and findings of this paper are the following . firstly , we have constructed a class of spatially extended systems in which any agent interacts with a finite but fixed number of neighbors and can choose either to follow the minority strategy based on information about the neighboring states or to select one randomly from a set of available strategies . the probability to follow the local minority strategy , or the probability of minority preference , is a key parameter determining the dynamics of the underlying complex system . secondly , we have carried out extensive
numerical simulations and discovered the emergence of a striking collective behavior : as the minority - preference probability is increased through a critical value , the set of available strategies / resources spontaneously breaks into pairs of groups , where the strategies in the same group are associated with a specific fluctuating behavior of the attendance . this phenomenon of strategy grouping is completely self - organized , which we conjecture to be the hallmark of mg dynamics with multiple resources . thirdly , we have developed a mean - field theory to explain and predict the emergence and evolution of the strategy - grouping states , with good agreement with the numerics . fourthly , we have examined a real - world system of a relatively small - scale stock - trading system and found unequivocal evidence of the grouping phenomenon . our results suggest grouping of resources as a fundamental type of collective dynamics in multiple - resource mg systems . other real - world systems to which our model is applicable include , e.g. , hedge - fund portfolios in financial systems , and routing issues in computer networks and urban traffic systems . we expect our model and findings to be relevant not only to statistical - physical systems , but also to a host of social , economical , and political systems . additionally , the double - grouping ( or paired - grouping ) period - doubling like bifurcation is not the exclusive mode for the grouping of resources . we have also observed period double - grouping [ see fig . [ fig : mapping](a)(b ) ] and period triplet - grouping bifurcation phenomena [ see fig . [ fig : mapping](c ) ] . further research on these phenomena is of great interest and valuable for the understanding of chaos and bifurcations in social systems . zgh thanks prof . matteo marsili for helpful discussions . this work was partially supported by the nsf of china ( grant nos .
10905026 , 10905027 , 11005053 , and 11135001 ) , srfdp no .20090211120030 , frfcu no .lzujbky-2010 - 73 , lanzhou dstp no .2010 - 1 - 129 .ycl was supported by afosr under grant no .fa9550 - 10 - 1 - 0083 .m. anghel , z. toroczkai , k. e. bassler and g. korniss , _ competition - driven network dynamics : emergence of a scale - free leadership structure and collective efficiency _ , phys .lett . * 92 * , 058701 ( 2004 ) . | the minority game ( mg ) has become a paradigm to probe complex social and economical phenomena where adaptive agents compete for a limited resource , and it finds applications in statistical and nonlinear physics as well . in the traditional mg model , agents are assumed to have access to global information about the past history of the underlying system , and they react by choosing one of the two available options associated with a single resource . complex systems arising in a modern society , however , can possess many resources so that the number of available strategies / resources can be multiple . we propose a class of models to investigate mg dynamics with multiple strategies . in particular , in such a system , at any time an agent can either choose a minority strategy ( say with probability ) based on available local information or simply choose a strategy randomly ( with probability ) . the parameter thus defines the _ minority - preference probability _ , which is key to the dynamics of the underlying system . a striking finding is the emergence of strategy - grouping states where a particular number of agents choose a particular subset of strategies . we develop an analytic theory based on the mean - field framework to understand the `` bifurcation '' to the grouping states and their evolution . the grouping phenomenon has also been revealed in a real - world example of the subsystem of stocks in the shanghai stock market s steel plate . 
our work demonstrates that complex systems following the mg rules can spontaneously self - organize into certain divided states , and our model represents a basic mathematical framework to address this kind of phenomena in social , economical , and even political systems . |
external biasing forces are often applied to enhance sampling in regions of phase space that would otherwise be rarely observed . these perturbing forces are applied in both time - dependent forms , as in single - molecule pulling and steered molecular dynamics experiments , and time - independent forms , as in umbrella sampling simulations . usually , the direction of bias is chosen to follow the reaction coordinate for an interesting process , such as a large conformational change in a biological macromolecule . with judicious reweighting of biased ensemble properties , previous workers have reconstructed the potential of mean force ( pmf ) along this bias coordinate . the ability to reconstruct pmfs in multiple dimensions and along multiple alternative degrees of freedom would significantly expand the data analysis tool kit for biased experiments . although the appropriate choice of reaction coordinate is fundamental to kinetic calculations , transition state theory , and the accurate separation of distinct microstates , its selection is limited by prior knowledge of the system . it would be useful to compute pmfs along alternate potential reaction coordinates , in one or several dimensions , after biased data is collected along an initial , possibly arbitrary choice . in addition , by using multidimensional pmfs , the mutual entropy can be used to quantify and compare correlations between modes . here , i develop methods to perform these calculations .
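one of the quantities used below , the mutual entropy between two coordinates , can be estimated directly from sampled data via a joint histogram ; a minimal sketch ( the binning and function name are choices of this illustration ) :

```python
import numpy as np

def mutual_information(x, y, bins=30):
    """Histogram estimate of the mutual entropy (mutual information, in
    nats) between two sampled coordinates:
    I = sum_ij p(x_i, y_j) * ln[ p(x_i, y_j) / (p(x_i) * p(y_j)) ]."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                          # joint probabilities
    px = pxy.sum(axis=1, keepdims=True)       # marginal of x, shape (B, 1)
    py = pxy.sum(axis=0, keepdims=True)       # marginal of y, shape (1, B)
    mask = pxy > 0                            # skip empty bins
    return float(np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])))
```

independent coordinates give a value near zero ( up to a small positive binning bias ) , while strongly coupled coordinates give a large value .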
they are applicable to both computer simulations , in which system properties are completely known , and laboratory experiments , in which several properties can be simultaneously measured . the pmf , , along a single coordinate , , is defined as \frac{\int \delta[x - x(\bm{r})]\, e^{-\beta \{h_0(\bm{r})\}}\, d\bm{r}}{\int e^{-\beta \{h_0(\bm{r}')\}}\, d\bm{r}'} \label{eq:pmf_1d} where is an arbitrary constant , is , and is the unperturbed system energy as a function of the phase space position . in two dimensions , becomes and ] . generalizing to multiple dimensions , becomes and the delta function is replaced by \prod_{i=1}^{n} \delta[y_i - y_i(\bm{r})] ] , where ] . they then multiply equation [ eq : phase_space_fe ] by ] , with set to . a periodic biasing program , , which has one period per trajectory , was used . the accumulated work was numerically integrated by (b_j - b_{j-1}) ] . when and are completely independent , the naive estimate for is as good as the estimate from equation [ eq : time_slice ] ( fig . [ fig : p100]a ) . if and are highly correlated , however , the naive estimate fails and a wham - informed reconstruction method is necessary ( fig . [ fig : u0 ] ) . there is no simple dependence , however , between error and mutual entropy ( fig . [ fig : error ] ) , as other factors of the free energy surface come into play . root mean square error for pmf estimates as a function of mutual entropy . reconstruction errors for pulling experiments are indicated by squares and for umbrella sampling by circles . errors from unbiased diffusion on the surface are indicated by diamonds . errors in wham - reconstructed surfaces are connected by solid lines , while the naive estimate errors are not connected .
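for reference , the one - and two - dimensional definitions above reduce , for equilibrium samples , to boltzmann inversion of the sampled density ; the sketch below is this simple equilibrium estimator ( names assumed ) , not the wham - informed nonequilibrium reconstruction of equation [ eq : time_slice ] :

```python
import numpy as np

def pmf_2d(x, y, beta=1.0, bins=50):
    """Estimate a two-dimensional PMF from equilibrium samples of the
    coordinates x and y via Boltzmann inversion of their joint histogram:
    F(x, y) = -(1/beta) * ln p(x, y), up to an additive constant."""
    h, xedges, yedges = np.histogram2d(x, y, bins=bins, density=True)
    with np.errstate(divide="ignore"):       # empty bins -> +inf free energy
        f = -np.log(h) / beta
    f -= np.min(f[np.isfinite(f)])           # shift the minimum to zero
    return f, xedges, yedges
```

for samples drawn from a harmonic well , the recovered surface is ( up to sampling noise and binning ) the quadratic potential , with its minimum at the origin .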
for example , due to the highly diffusive nature of the dynamics and the large timestep necessary for fast sampling , sampling is erroneously enhanced near the boundaries of small low - energy wells . this leads to systematic free energy underestimates in all simulations , even those without any external bias ( i.e. fig . [ fig : d75 ] ) . although there are errors in wham - informed reconstructions from simulations with external bias , they are no worse than errors from long simulations with free diffusion ( fig . [ fig : error ] ) . on certain free energy surfaces , phase space sampling along the dimension may be limited by trapping in local minima . this is especially true if and are relatively independent and biasing along does not enhance conformational sampling along . sampling is generally limited in high - energy regions of phase space ( see figs . [ fig : u0 ] , [ fig : p30 ] , [ fig : d75 ] , and [ fig : p100 ] ) . in these regions , the most accurate two - dimensional pmf reconstruction would require biasing along both and , which can be accounted for by a simple modification of equation [ eq : time_slice ] . in general , accurate pmf reconstruction in higher dimensions may require biasing along coordinates that closely correspond to these dimensions . close correspondence may be indicated by high mutual entropy ; this aspect of analysis will be left to further study . root mean square error for pmf estimates as a function of the number of simulations , . error bars indicate the mean and standard deviations of the rmse from five reconstructions using simulations . reconstruction errors for pulling experiments are indicated by squares and for umbrella sampling by circles . unbiased simulations were run for 10 times as long and are therefore spaced 10 times farther apart on the x axis .
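the accumulated work from the biasing program enters these reconstructions through the nonequilibrium work relation ; a hedged sketch of the plain jarzynski exponential - average estimator ( a simplification , not the time - slice method used in the text ) illustrates why rare low - work trajectories dominate the estimate :

```python
import numpy as np

def jarzynski_estimate(work, beta=1.0):
    """Free-energy estimate from nonequilibrium work values via the
    Jarzynski equality, exp(-beta*dF) = <exp(-beta*W)>.  Uses a shifted
    log-sum-exp for numerical stability; the average is dominated by the
    rare trajectories with negative dissipative work (W < dF)."""
    w = np.asarray(work, dtype=float)
    m = w.min()                                     # shift for stability
    return m - np.log(np.mean(np.exp(-beta * (w - m)))) / beta
```

for gaussian - distributed work with mean mu and variance sigma ** 2 , the exponential average gives dF = mu - beta * sigma ** 2 / 2 , which the estimator recovers with enough trajectories .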
on the toy two - dimensional surfaces studied , pmf reconstructions along using equation [ eq : time_slice ] actually converge more quickly than equilibrium umbrella sampling and reconstruction from unbiased experiments ( fig . [ fig : convergence ] ) . while time - dependent biasing experiments will sample the whole span of the bias coordinate _ de facto _ , umbrella sampling will inherently require a certain number of simulations at different bias centers to attain this broad range . similarly , unbiased simulations will need to be run for longer in order to reach more distal regions of phase space . generally , it is preferable to run a greater number of nonequilibrium experiments than a single long equilibrium experiment , because the outcome of each individual trajectory can be highly dependent on the initial conditions . the sampling problems described on these two - dimensional surfaces are likely to be magnified in systems with higher complexity . the main caveat with this method , as with all methods based on the nonequilibrium work relation , is that with greater free energy differences , observing trajectories with negative dissipative work ( work that is less than the free energy difference ) becomes increasingly unlikely . pulling experiments are poised to find broader utility in studies of nanoscale systems , particularly when free energy differences between states are not prohibitively large . i hope this work will find wide application in computational and laboratory experiments and stimulate further research in the area . matlab source code to perform the described analysis is available at http://mccammon.ucsd.edu/~dminh / software/. i would like to thank r. amaro , c.e . chang , j.
gullingsrud , and j.a . mccammon for helpful discussions . i am funded by the nih molecular biophysics training grant at ucsd . mccammon group resources are supported by the nsf ( mcb-0506593 ) , nih ( gm31749 ) , hhmi , ctbp , nbcr , the w.m . keck foundation , and accelrys , inc . | external biasing forces are often applied to enhance sampling in regions of phase space which would otherwise be rarely observed . while the typical goal of these experiments is to calculate the potential of mean force ( pmf ) along the biasing coordinate , here i present a method to construct pmfs in multiple dimensions and along arbitrary alternative degrees of freedom . a protocol for multidimensional pmf reconstruction from nonequilibrium single - molecule pulling experiments is introduced and tested on a series of two - dimensional potential surfaces with varying levels of correlation . reconstruction accuracy and convergence from several methods - this new protocol , equilibrium umbrella sampling , and free diffusion - are compared , and nonequilibrium pulling is found to be the most efficient . to facilitate the use of this method , the source code for this analysis is made freely available . department of chemistry and biochemistry , center for theoretical biological physics , and howard hughes medical institute , university of california at san diego , san diego , california 92093 |
near - field wireless power transfer ( wpt ) has drawn significant interest recently due to its high efficiency in delivering power to electric loads without the need for any wire . inductive coupling ( ic ) is the conventional method to realize near - field wpt for short - range applications , typically within a couple of centimeters . the wireless power consortium ( wpc ) , which developed the `` qi '' standard , is the main industrial organization for commercializing wireless charging based on ic . recently , magnetic resonant coupling ( mrc ) has been applied to significantly enhance the transfer efficiency as well as the range of wpt compared to ic , thus opening a broader avenue for practical applications . in mrc - enabled wpt ( mrc - wpt ) , compensators , each being a capacitor of variable capacity , are embedded in the electric circuits of the power transmitter and receiver to tune their oscillating frequencies to be the same as the frequency of the input voltage / current source to achieve resonance . alternatively , resonators , each of which constitutes a simple rlc circuit resonating at the source frequency , can be deployed in close proximity to the coils of the off - resonance power transmitter and receiver to help efficiently transfer power between them . with mrc , the total reactive power consumption in the system is effectively minimized due to resonance , and thus high power transfer efficiency is achieved over a longer distance than with ic . the preliminary experiments in show that mrc enables a single transmitter to transfer watts of power wirelessly with efficiency to a single receiver at a distance of about meters .
formed after the merger between the alliance for wireless power ( a4wp ) that developed the `` rezence '' specification and the power matters alliance ( pma ) , the airfuel alliance is the main industrial organization for promoting wireless charging based on mrc .the rezence specification advocates a superior charging range , the capability of charging multiple devices at the same time , and the use of two - way communication via e.g. bluetooth between the charger unit and devices for real - time charging control .these features make rezence and its future extensions a promising technology for wireless charging systems . in the current rezence specification , one transmitter with a single coil is used in the power transmitting unit , i.e. , only single - input multiple - output ( simo ) mrc - wpt is considered , as shown in fig .[ fig : wpttable](a ) .although this centralized wpt system performs well when the receivers are all sufficiently close to as well as perfectly aligned with the transmitter , the power delivered to a receiver decays significantly as it moves farther away from the transmitter or misaligns with its orientation .this thus motivates distributed wpt , where the single centralized transmitter coil is divided into multiple coils each of smaller size ( radius ) and these coils are placed in different locations to cover a given region , as shown in fig .[ fig : wpttable](b ) .
by coordinating the transmissions of multiple coils via jointly allocating their source currents , it has been shown in that their induced magnetic fields can be directed more efficiently toward one or more receivers at the same time , thus achieving a _ magnetic beamforming _ gain in a manner analogous to multi - antenna beamforming in far - field wireless communication and power transfer .in addition , distributed wpt significantly shortens the distance from each receiver to its nearest transmitter(s ) compared to centralized wpt , thus achieving more uniform charging performance in the region .the optimal magnetic beamforming design in multiple - input single - output ( miso ) mrc - wpt systems has been investigated in .specifically , has formulated a convex optimization problem to jointly optimize the source currents at all transmitters to maximize the wpt efficiency subject to the constraint that the deliverable power to the receiver load is fixed . on the other hand , has jointly optimized the transmitter currents to maximize the deliverable power to a single load by considering a sum - power constraint for all transmitters as well as practical peak voltage and current constraints at each individual transmitter . recently , selective wpt has also been proposed for simo mrc - wpt systems in .this technique delivers power to only one selected receiver ( i.e.
, receiver selection ) at each time to eliminate the magnetic cross - coupling effect among different receivers and hence achieve more balanced power transfer to them , assuming that their natural frequencies are set well separated from each other .alternatively , has proposed to jointly optimize the load resistance of all receivers in a simo mrc - wpt system to manage the magnetic cross - coupling effect and further alleviate the near - far issue by delivering balanced power to all receivers regardless of their distances to the power transmitter .the selective wpt technique has also been utilized in miso mrc - wpt systems where one transmitter is selected at each time ( i.e. , transmitter selection ) to deliver wireless power to a single receiver . in general , selective wpt requires a simpler control mechanism than magnetic beamforming , while its performance is also limited since only one pair of transmitter and receiver is allowed for power transfer at each time .in contrast , magnetic beamforming enables multiple transmitters to send power to one or more receivers simultaneously by properly assigning the currents in the transmitters and/or the resistance values in the receivers , thus in general achieving better performance than transmitter / receiver selection .the studies in have shown promising directions to improve the efficiency as well as performance fairness in mrc - wpt systems , but all of them have assumed that the transmitters and receivers are at given locations in a target region . in practice , a wireless device ( e.g. , a mobile phone ) should be free to be located at any position in the region ( e.g. , above a charging table ) while it is being charged , for the convenience of its user .
in this case , given fixed locations of the transmitters in a miso mrc - wpt system , the deliverable power at a single receiver can fluctuate significantly across its locations .such power fluctuation degrades the quality of service , since the power requirement of the receiver load may not be satisfied at all locations in the region , even when magnetic beamforming is applied to dynamically adjust the magnetic fields according to the instantaneous location of the receiver .this thus motivates our work in this paper to jointly optimize the transmitter locations and magnetic beamforming in a miso mrc - wpt system to achieve the maximum and yet uniformly deliverable power in the region .the main results of this paper are summarized as follows : * first , we formulate the magnetic beamforming problem for a miso mrc - wpt system with distributed transmitters to maximize the deliverable power to a single receiver subject to a given sum - power constraint for all transmitters , by assuming that the power transmitters and receiver are all at fixed locations .we derive the closed - form solution to the magnetic beamforming problem as a function of the mutual inductances between the transmitters and the receiver .our solution shows that the optimal current allocated to each transmitter is proportional to the mutual inductance between its coil and that of the receiver .for the special case when the transmitters are sufficiently separated from each other , we show that the optimal magnetic beamforming reduces to the simple transmitter selection scheme where all power is allocated to the single transmitter that has the highest mutual inductance with the receiver .
* to demonstrate the performance gain of magnetic beamforming , we compare it to an uncoordinated wpt system with equal current allocation over all transmitters , as well as the transmitter selection scheme .furthermore , we compare the performance of distributed wpt with magnetic beamforming versus centralized wpt subject to the same total size of transmitter coils .* next , with the optimal magnetic beamforming solution derived , we formulate the node placement problem to jointly optimize the transmitter locations to maximize the minimum deliverable power to the receiver over a given one - dimensional ( 1d ) target region , i.e. , a line of finite length where the receiver can be located at any point on the line . although the formulated problem is non - convex , we propose an iterative algorithm for solving it approximately by leveraging the fact that the transmitters should be symmetrically located about the mid - point of the target line to maximize the minimum deliverable power .we present extensive simulation results to verify the effectiveness of our proposed transmitter location optimization algorithm in improving both the minimum deliverable power as well as the average deliverable power over the target line as compared to a heuristic design that uniformly locates the transmitters .* finally , we extend the node placement problem to the general two - dimensional ( 2d ) region case , i.e. , a disk in 2d with a finite radius .
using an example of five transmitters ,we show that the design approach for the 1d case can be similarly applied to obtain the optimal locations of the transmitters under the 2d setup with magnetic beamforming to maximize the minimum deliverable power over the target disk by exploiting its circular symmetry .the rest of this paper is organized as follows .section ii introduces the system model .section iii formulates the magnetic beamforming problem and presents its optimal solution .section iv formulates the node placement problem for the 1d target region case , and presents an iterative algorithm for solving it .section v presents simulation results for the 1d case .section vi extends the node placement problem to the 2d target region case with an example of five transmitters .finally , we conclude the paper in section vii .in this paper , we consider a miso mrc - wpt system with identical single - coil transmitters , indexed by , , and a single - coil receiver , indexed by for convenience .it is assumed that all the transmitters and receiver are each equipped with a bluetooth communication module to enable information exchange among them to achieve coordinated wpt .each transmitter is connected to a stable energy source supplying sinusoidal voltage over time given by , where is a complex number denoting the steady state voltage in phasor form and denotes its angular frequency .note that represents the real part of a complex number . on the other hand, the receiver is connected to an electric load , e.g. , the battery of a mobile phone .let , where , with , denotes the steady state current in transmitter .this current produces a time - varying magnetic flux in the transmitter coil , which passes through the receiver coil and induces time - varying current in it .we denote , with , as the steady state current at the receiver . 
for the time being , we consider the case of a 1d region for wpt , which is a straight line of finite length , with .specifically , the receiver can move horizontally along the line with its x - coordinate satisfying , which lies in the plane with , , as shown in fig .[ fig : miso - linear ] . note that denotes the absolute value of a real / complex number .the transmitters are installed at fixed locations along the line that is in parallel with the target line in the plane with .this 1d target line model is mainly used for the purpose of exposition , while it may also be applicable to practical scenarios such as a magnetic train that moves over segments of tracks each equipped with a number of distributed wireless power transmitters for charging the train .let with ( with ) denote the location of transmitter ( receiver ) over the x - axis . in this paper , we consider that s are symmetric over , since such a symmetric structure of the transmitters maximizes the minimum power deliverable to the receiver over the target line . hence , we consider the following two cases for the symmetric deployment of the transmitters .* case : is even . in this case , let and we set , with , , as shown in fig .[ fig : miso - linear](a ) .* case : is odd . in this case , let and we set , with , , and , as shown in fig .[ fig : miso - linear](b ) .let ( ) , ( ) , and ( ) denote the parasitic resistance , the self - inductance , and the capacitance of the compensator in each transmitter ( receiver ) , respectively .denote as the resistance of the load at the receiver .accordingly , we use to represent the total ohmic resistance of the receiver .
by assuming that the coil of each of the transmitters as well as the receiver consists of multiple closely wound turns of round - shaped wire , we obtain and , where ( ) , ( ) , ( ) , and ( ) are the average radius of the coil of each transmitter ( receiver ) , the radius of the wire used to make the coil , the ohmic resistivity of the wire , and the number of turns of the coil , respectively . furthermore , we obtain and , where / a is the magnetic permeability of the air . the capacitances of the compensators are then chosen such that the natural frequencies of the transmitters and receiver become the same as the source angular frequency , i.e. , we set and .let and be real numbers denoting the mutual inductance between the coils of transmitters and , with , as well as that between transmitter and the receiver , respectively . based on the so - called conway 's mutual inductance formula , we obtain where , , and is the bessel function of the first kind of order with denoting the factorial of a positive integer .the integration terms in ( [ eq : h_nk ] ) and ( [ eq : h_n0 ] ) can be computed numerically , while there are no closed - form analytical expressions for them . in practice , the transmitters and receiver commonly use small coils for wpt ; therefore , in ( [ eq : h_n0 ] ) can be simplified as follows .[ lemma : mutualinductance ] if , we have where is a constant with the given coil parameters .please see appendix a. to validate the accuracy of the proposed approximation in ( [ eq : hn0-simplified ] ) , we consider the following setup . particularly , we consider case in fig .
[fig : miso - linear](b ) with identical transmitters , m , and variable , where the physical and electrical characteristics of the coils in the transmitters and receiver are given in tables [ tab : physical - charac ] and [ tab : electrical - charac ] ( see section [ sec : numerical example ] ) , respectively .we assume that the transmitters are uniformly located over m , with m , m , and . figs .[ fig : mutual_inductance_veification](a ) and [ fig : mutual_inductance_veification](b ) compare the actual and approximated values of the mutual inductance between transmitter and the receiver , , versus the receiver 's x - coordinate under fixed height cm and cm , respectively .it is observed that the approximation is tight in general , whereas there are small discrepancies at .it is also observed that the discrepancies decrease when increases .note that a similar result can be obtained for the mutual inductance between other transmitters and the receiver , while the peak value of the mutual inductance shifts over the x - axis accordingly , i.e. , it moves from to when transmitter is considered instead of transmitter . in this paper , for simplicity we will use the approximation in ( [ eq : hn0-simplified ] ) to design the node placement for the transmitters in sections [ sec : node placement optimization ] and [ sec : node placement optimization 2d ] , while the actual value in ( [ eq : h_n0 ] ) is used for all simulations to achieve the best accuracy . by applying kirchhoff 's circuit laws to the electric circuits of the transmitters and receiver in our considered mrc - wpt system shown in fig .[ fig : electric - circuit ] , we obtain accordingly , the average power drawn from the energy source at transmitter , denoted by , and that delivered to the load at the receiver , denoted by , are obtained as where denotes the conjugate of a complex number .
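for intuition , the small - coil regime in lemma [ lemma : mutualinductance ] behaves like a magnetic dipole coupling , with the mutual inductance decaying as the inverse cube of the coil separation . the sketch below uses the textbook coaxial - dipole formula ; the constant and the inverse - cube distance dependence are standard assumptions here , not the exact expression from ( [ eq : hn0-simplified ] ) .

```python
import math

MU0 = 4e-7 * math.pi  # magnetic permeability of free space, H/m

def mutual_inductance_dipole(n_t, n_r, a_t, a_r, d):
    """Magnetic-dipole approximation of the mutual inductance between two
    small coaxial circular coils whose centre-to-centre distance d is much
    larger than either coil radius (a_t, a_r); n_t, n_r are turn counts."""
    return MU0 * math.pi * n_t * n_r * (a_t * a_r) ** 2 / (2.0 * d ** 3)

# Example: two 5 cm coils with 100 turns each, 30 cm apart (illustrative values)
M = mutual_inductance_dipole(100, 100, 0.05, 0.05, 0.30)
```

doubling the separation reduces the coupling by a factor of eight , which is the inverse - cube scaling behind the rapid power decay away from a transmitter .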
furthermore , with the result given in ( [ eq : p_n ] ) , the sum - power drawn from all transmitters is obtained as from ( [ eq : p_n ] ) and ( [ eq : p_sum ] ) , it follows that the power consumption of each individual transmitter depends on all the mutual inductances between the transmitters and the receiver , s , as well as those between any pair of transmitters , s , while their sum - power consumed depends on s only . from ( [ eq : p_0 ] ) and ( [ eq : p_sum ] ) , it also follows that the real - part currents s and the imaginary - part currents s contribute in the same way to or .consequently , in this paper , we set , , and focus on designing s without loss of generality .moreover , due to the fact that each is a function of both and with given ( see , e.g. , ( [ eq : hn0-simplified ] ) ) , we re - express and in ( [ eq : p_0 ] ) and ( [ eq : p_sum ] ) as functions of , s , and s as next , we introduce four metrics to evaluate the performance of the mrc - wpt system considered in this paper , which are the average value , the minimum value , the maximum value , and the min - max ratio of the deliverable power to the receiver load over the target line ( or target region in general ) , defined as note that both the transmitter currents s and the transmitter locations s can have an influence on each of the above performance metrics for the mrc - wpt system ; therefore , we need to design them jointly to optimize each corresponding performance in general . in practice , it is desirable to have both large and to maximize the wpt efficiency , and yet have acceptably high and to achieve uniform performance over the target region . however , in general , there are trade - offs in achieving these objectives at the same time , e.g. , maximizing versus . 
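the receiver - side relations above can be sketched numerically . assuming every circuit is tuned to the source frequency so that all reactive terms cancel , kirchhoff 's voltage law at the receiver gives the load current directly from the transmitter currents and the mutual inductances ; the helper names and the sampled - grid form of the four performance metrics below are my choices , not notation from the text .

```python
import numpy as np

def load_power(i_tx, m_tx_rx, omega, r_rx_total, r_load):
    """Average power delivered to the load.  At resonance, Kirchhoff's
    voltage law at the receiver reduces to
        r_rx_total * i0 + 1j * omega * sum_n M_n0 * i_n = 0,
    where r_rx_total is the receiver's total ohmic resistance (parasitic
    resistance plus load) and r_load the load resistance alone."""
    i0 = -1j * omega * np.dot(m_tx_rx, np.asarray(i_tx)) / r_rx_total
    return 0.5 * np.abs(i0) ** 2 * r_load

def line_metrics(p0_of_x, x_grid):
    """The four performance metrics over a sampled target line: average,
    minimum, maximum, and min-max ratio of the deliverable load power."""
    p = np.array([p0_of_x(x) for x in x_grid])
    return {"avg": p.mean(), "min": p.min(),
            "max": p.max(), "min_max_ratio": p.min() / p.max()}
```

since the load current is linear in the transmitter currents , the delivered power scales with the square of a common current scaling , which is the structure exploited by the beamforming design below .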
in the rest of this paper , we first design the magnetic beamforming via adjusting s by assuming fixed locations of the transmitters and receiver ( s and ) to maximize the deliverable power subject to a given sum - power constraint for all transmitters .next , with the obtained optimal magnetic beamforming solution , we optimize the transmitter locations s to maximize the minimum power deliverable to the receiver over the target region , i.e. , given in ( [ eq : p0min ] ) , for both the cases of 1d and 2d target regions , respectively .in this section , we first present the results on the magnetic beamforming optimization .we then use a numerical example to demonstrate the performance advantage of optimal distributed magnetic beamforming .assume that s and are given , and hence the mutual inductance values s are known .we formulate the magnetic beamforming problem for designing the transmitter currents s to maximize the deliverable power to the receiver load , given in ( [ eq : fun_p_0 ] ) , subject to a maximum sum - power constraint at all transmitters , denoted by , as follows . ( p1 ) can be shown to be a non - convex quadratically constrained quadratic programming ( qcqp ) problem , since its objective is to maximize a convex quadratic function in ( [ eq : p1_obj ] ) . however , we obtain the optimal solution to ( p1 ) in the following proposition . [ proposition : optimal_current ] the optimal solution to ( p1 ) is given by , , with please see appendix b. from ( [ eq : u_n ] ) , it follows that the current allocated to each transmitter is proportional to the mutual inductance between its coil and that of the receiver , .moreover , it can be seen that when there exists an such that , , then .this means that all transmit power is allocated to transmitter which has the dominant mutual inductance magnitude with the receiver ( e.g. , when the receiver is directly above transmitter and farther from its adjacent transmitters ) , i.e.
, the transmitter selection technique is optimal . to implement the optimal magnetic beamforming solution in practice , each transmitter needs to estimate the mutual inductance between its coil and that of the receiver , , in real time , and send it to a central controller via e.g. the bluetooth communication considered in the rezence specification . given the information received from all transmitters , the central controller computes the optimal transmitter currents s and sends them to the individual transmitters for implementing distributed magnetic beamforming . as shown in fig .[ fig : electric - circuit ] , it is more convenient to use a voltage source than a current source at the transmitters in practice .from ( [ eq : in ] ) , we can easily obtain the optimal source voltages s that generate the optimal currents s for practical implementation .next , by substituting , , in ( [ eq : fun_p_0 ] ) , the power delivered to the load with optimal magnetic beamforming is given by from ( [ eq : p_0^ * ] ) , it follows that the deliverable power with optimal magnetic beamforming is a function of s , and hence invariant to the signs of individual s .this is expected since optimal magnetic beamforming constructively combines the magnetic flux generated by the individual transmitters at the receiver ./ ( cm ) & number of turns / & material of wire & radius of wire / ( mm ) & resistivity of wire / ( /m ) + transmitter & & & copper & 0.1 & + receiver & & & copper & 0.1 & + [ tab : physical - charac ] / ( ) & self - inductance / ( mh ) & series compensator / ( ff ) + transmitter & & & + receiver & & & + [ tab : electrical - charac ] we consider an mrc - wpt system with identical transmitters and a single receiver that is connected to a load with resistance .the physical and electrical characteristics of the coils in the transmitters and receiver are given in tables [ tab : physical - charac ] and [ tab : electrical - charac ] , respectively .we set cm , m , / sec , and .
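a minimal sketch of the current allocation in proposition [ proposition : optimal_current ] : each transmitter current is set proportional to its mutual inductance with the receiver , and the current vector is scaled to meet the power budget . treating the drawn sum power as the transmitters ' ohmic loss alone is a simplifying assumption here ; the exact sum - power expression in ( [ eq : p_sum ] ) also includes the power coupled into the receiver .

```python
import numpy as np

def beamform_currents(m_tx_rx, p_budget, r_tx):
    """Current allocation following the proposition: i_n proportional to the
    mutual inductance M_n0 between transmitter n and the receiver.  The
    scaling assumes the drawn sum power is dominated by the transmitters'
    ohmic loss (1/2) * r_tx * sum_n i_n**2 -- a simplification of the exact
    sum-power expression in the text."""
    m = np.asarray(m_tx_rx, dtype=float)
    w = m / np.linalg.norm(m)                 # unit-norm beamforming vector
    return np.sqrt(2.0 * p_budget / r_tx) * w

# When one mutual inductance dominates, the allocation degenerates to
# transmitter selection: almost all current goes to that transmitter.
i = beamform_currents([1e-6, 1e-9, 1e-9], p_budget=1.0, r_tx=0.5)
```

the mutual inductance values here are illustrative assumptions ; the proportional structure itself is the content of the proposition .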
in this example , we assume that the transmitters are uniformly located over m , with m , m , and . for performance comparison , we also consider uncoordinated wpt with equal current allocation over all transmitters , as well as the transmitter selection technique which only selects the transmitter with the largest mutual inductance ( squared ) value with the receiver for wpt with the full transmit power , . fig .[ fig : p0_3tx_x0 ] compares the deliverable load power given in ( [ eq : fun_p_0 ] ) versus the receiver location under three schemes : equal ( transmitter ) current with uniform ( transmitter ) location ( ecul ) , optimal ( transmitter ) current with uniform ( transmitter ) location ( ocul ) , and transmitter selection with uniform ( transmitter ) location ( tsul ) .it is observed that ocul in general delivers higher power to the load than ecul and tsul , and also achieves a larger minimum power over the receiver location .it is also observed that the three schemes all tend to deliver more power to the load when the receiver is in close proximity to one of the transmitters at , m , and m .furthermore , it is observed that tsul performs quite close to ocul except in the middle areas between any two adjacent transmitters , where the minimum deliverable power occurs .this observation is expected since , when the receiver is in the middle of any two adjacent transmitters , optimal magnetic beamforming with both transmitters delivering power to the receiver load achieves a more pronounced combining gain as compared to transmitter selection with only one of the two transmitters selected for wpt .next , we show the performance of centralized wpt , where a single transmitter is located at which sends wireless power to a receiver moving along the target line .
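the comparison among equal current allocation , transmitter selection , and the optimal ( proportional ) allocation can be reproduced on toy numbers . the mutual inductance values and the current budget below are illustrative assumptions , and the received power is modeled as proportional to the squared weighted sum of currents , as in the resonant circuit model above .

```python
import numpy as np

# Illustrative mutual inductances M_n0 [H] and a common current budget;
# these numbers are assumptions, not values from the text.
m = np.array([8.0, 3.0, 1.0]) * 1e-7
budget = 1.0                          # constraint: sum_n i_n**2 <= budget

def p_load(i):
    """Received-power figure of merit, proportional to |sum_n M_n0 i_n|**2."""
    return np.dot(m, i) ** 2

n = len(m)
i_equal = np.sqrt(budget / n) * np.ones(n)        # equal current (ECUL-style)
i_select = np.zeros(n)                            # transmitter selection (TSUL-style)
i_select[np.argmax(np.abs(m))] = np.sqrt(budget)
i_opt = np.sqrt(budget) * m / np.linalg.norm(m)   # proportional to M_n0 (OCUL-style)
```

by the cauchy - schwarz inequality , the proportional allocation maximizes this figure of merit for any fixed current budget , which is why the optimal - current scheme upper - bounds both equal current allocation and transmitter selection .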
for this centralized transmitter case , we set turns and mm , where the radius of its coil is times larger than that of each transmitter in the case of distributed wpt for fair comparison .[ fig : p0_1tx_x0 ] plots for centralized wpt versus .it is observed that the deliverable power to the load is zero at m , while its global and local maximums occur at and m , respectively .note that from ( [ eq : h_n0 ] ) , it follows at m ; as a result , by setting in ( [ eq : p_0 ] ) , we have , regardless of the transmit current . the details of performance comparison between distributed versus centralized wpt in terms of the four metrics introduced in section [ sec : system - setup ] ( see ( [ eq : p0ave])([eq : min - max - ratio ] ) ) are given in table [ tab : quantitivecomparision ] . ( w ) & ( w ) & ( % ) + & ocul & & & & + & ecul & & & & + & tsul & & & & + & & & & + it is observed that distributed wpt with ocul and tsul achieves similar and slightly better compared to centralized wpt . however , in terms of and the min - max load power ratio , distributed wpt achieves significant improvement over centralized wpt .although distributed wpt with ocul achieves the highest of , it is still far from the ideal uniform power profile with . to further improve this performance , in the next section, we will formulate the node placement problem to design the transmitter locations to maximize the minimum deliverable power to the load over the target line jointly with the optimal magnetic beamforming .it is worth pointing out that the transmitter locations can be optimized with magnetic beamforming to improve other performance metrics such as maximizing the average load power , maximizing the min - max ratio of the load power , etc . 
, which will lead to different optimal transmitter locations in general .we leave other possible node placement problem formulations to our future work .in this section , we first present the node placement optimization problem for the 1d target region case , and then propose an iterative algorithm to solve it . let denote the minimum deliverable power to the load over the target line in the 1d case .the node placement problem is formulated as with given in ( [ eq : p_0^ * ] ) .first , it can be easily shown by contradiction that the optimal solution s to ( p2 ) must be symmetric over , as shown in fig .[ fig : miso - linear ] . with symmetric transmitter locations, then it follows that the load power profile is also symmetric over ; as a result , the constraint ( [ eq : p2_c1 ] ) only needs to be considered over . with the above observations , we simplify ( p2 ) for the cases of even and odd , respectively , as follows .when is even , we have where is defined as note that since it can be easily verified that the constraint in ( [ eq : p2_c1 ] ) is infeasible regardless of when , we define for in ( [ eq : const1:p2-e ] ) for convenience . on the other hand ,when is odd , we have ( p2 ) and ( p2 ) are both non - convex optimization problems due to the constraints in ( [ eq : const1:p2-e ] ) and ( [ eq : const1:p2-o ] ) , respectively .thus , it is difficult to solve them optimally . in the following ,we propose an iterative algorithm to obtain approximate solutions for them . in this subsection , we focus on the problem ( p2 ) for the even case , while the proposed algorithm can be similarly applied for ( p2 ) in the odd case . in ( p2 ) , we need to find the largest , , under which the problem is feasible over all possible transmitter ( one - sided ) locations s . to this end , we apply the bisection method to find the largest by using the fact that if ( p2 ) is not feasible for a certain , , then it can not be feasible for . 
similarly , if ( p2 ) is feasible for , then it must be feasible for .the details of our proposed algorithm are given in the following .initialize and . at each iteration , we first set , and test the feasibility of ( p2 ) given by considering the following feasibility problem . if ( p2f ) is feasible , we save its solution as , , and update to search for larger values of in the next iteration .otherwise , if ( p2f ) is infeasible , we update to search for smaller values of in the next iteration .we stop the search when , where is a small constant controlling the algorithm accuracy .it can be easily shown that the algorithm converges after about iterations .after convergence , we return as the solution to ( p2 ) , and set , , as the solution to ( p2 ) for the even case .next , we focus on solving the feasibility problem ( p2f ) at each iteration . since ( p2f ) is non - convex , we use the following gradient - based method to search for a feasible solution to this problem in an iterative manner .initialize , . at each iteration , given s , we check whether the constraint ( [ eq : const1:p2-e ] ) holds or not .if the constraint holds , we return s as a feasible solution to ( p2f ) and stop the search ; otherwise , we update s as follows .first , we find , which can be numerically obtained with the given s .next , by setting in the left - hand side ( lhs ) of ( [ eq : const1:p2-e ] ) , which gives the minimum deliverable power to the load over the target line , , with the given transmitter locations s , the derivative of with respect to each is computed as accordingly , we set if , or otherwise , , with denoting a small step size .it can be easily verified that the above update helps increase if is chosen to be sufficiently small .we repeat the above procedure for a maximum number of iterations , denoted by , after which we stop the search and return that ( p2f ) is infeasible since the constraint ( [ eq : const1:p2-e ] ) still does not hold with all the s derived .
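the bisection - plus - gradient procedure described above can be sketched as follows . the function names , the finite - difference gradient , the fixed step size , and the sampling grid are my choices ; the random - restart refinement is omitted for brevity .

```python
import numpy as np

def max_min_placement(p0, d_init, lo, hi, half_line, tol=1e-4,
                      max_grad=200, step=5e-3, eps=1e-5, n_grid=101):
    """Bisection on the target minimum deliverable power, with a
    gradient-ascent feasibility check on the one-sided transmitter
    offsets d (a sketch of the algorithm; random restarts omitted).
    p0(d, x): deliverable load power at receiver position x for offsets d;
    by symmetry only x in [0, half_line] needs to be checked."""
    xs = np.linspace(0.0, half_line, n_grid)

    def p_min(d):
        return min(p0(d, x) for x in xs)

    def feasible(target, d):
        d = d.copy()
        for _ in range(max_grad):
            if p_min(d) >= target:
                return True, d
            x_star = min(xs, key=lambda x: p0(d, x))      # worst receiver spot
            grad = np.array([(p0(d + eps * e, x_star) - p0(d, x_star)) / eps
                             for e in np.eye(len(d))])
            d = np.clip(d + step * np.sign(grad), 0.0, half_line)
        return False, d

    d_best = np.array(d_init, dtype=float)
    while hi - lo > tol:                                  # bisection on target power
        mid = 0.5 * (lo + hi)
        ok, d_try = feasible(mid, d_best)
        if ok:
            lo, d_best = mid, d_try
        else:
            hi = mid
    return lo, d_best
```

for example , with a toy single - transmitter power model p0(d , x) = 1 / (1 + (x - d)^2) over a unit half - line , the search places the transmitter near the mid - point and returns a minimum power close to 0.8 .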
in practice , the performance of the gradient - based search for the feasible solution to ( p2f ) depends on the initial values of s , as the search in general converges to a local maximum of the lhs function of ( [ eq : const1:p2-e ] ) . to improve the accuracy of the search , if it fails to find a feasible solution to ( p2f ) after iterations , then we repeat the search with a new initial point given by , , with randomly generated which is uniformly distributed over .the maximum number of sets of randomly generated initial points is limited by , and we declare ( p2f ) infeasible if we fail to find a feasible solution to ( p2f ) with all sets of initial points generated . in general , a larger helps improve the overall accuracy of the bisection search , but at the cost of more computational complexity . to summarize , the complete algorithm to solve ( p2 ) , denoted by algorithm 1 , is given in table [ algorithm:1 ] :

* algorithm *
1 ) initialize , , , , , and .
2 ) while do :
  a ) set .
  b ) set , , , and , .
  c ) while , , and :
    - given s , check the constraint ( [ eq : const1:p2-e ] ) . if it holds , then set and go to step 3 ) ; otherwise , find the derivatives s as in ( [ eq : gradient ] ) and set if , or otherwise , .
    - set .
  d ) if and , then reset the initial points as , . set and .
  e ) if , then set , , and ; otherwise set .
3 ) return s as the solution to ( p2 ) .
[ algorithm:1 ]

in this section , we present further simulation results to evaluate the performance of the proposed transmitter node placement algorithm , i.e. , algorithm 1 .we consider the same system setup as that in section [ sec : numerical example ] , with identical transmitters . since is odd here , we modify algorithm 1 for the even case to apply it to our considered system setup with transmitters .we set , , , and .first , fig .[ fig : optimallocaion ] shows the optimized ( transmitter ) locations ( ol ) , i.e.
, s given by algorithm 1 , versus the uniform ( transmitter ) locations ( ul ) .it is observed that for ol , except for the transmitter that is located below the center of the target line ( ) , the other four transmitters all move closer to the center compared to ul . ( w ) & ( w ) & ( % ) + ocol & & & & + ecol & & & & + ocul & & & & + [ tab : quantitivecomparision2 ] next , fig .[ fig : nodepalacement_3tx ] compares the deliverable power to the load , given in ( [ eq : fun_p_0 ] ) , versus the receiver location , under three schemes : optimal ( transmitter ) current with optimized ( transmitter ) location ( ocol ) , equal ( transmitter ) current with optimized ( transmitter ) location ( ecol ) , and optimal ( transmitter ) current with uniform ( transmitter ) location ( ocul ) .it is observed that ocol , with both optimized transmitter locations and optimal magnetic beamforming , improves the minimum deliverable power significantly over the other two schemes with only optimized transmitter locations or optimal magnetic beamforming .in fact , ocol achieves the best performance in terms of all metrics , where the details are given in table [ tab : quantitivecomparision2 ] . in addition , fig .[ fig : p0min_d ] plots the minimum load power given in ( [ eq : p0min ] ) versus the target line length , under the three schemes .it is observed that ocol consistently achieves better performance than the other two schemes , although the gain decreases when is small or large .this can be explained as follows .when is small , the mutual inductance between the receiver and different transmitters is less sensitive to their locations , which implies that the gain of transmitter placement optimization is small .
in this case , from ( [ eq : u_n ] ) , it follows that the transmitter currents tend to be all equal , and hence the magnetic beamforming gain over the equal current allocation is also negligible .similarly , when is large , the distance between transmitters is large for both the ul and ol designs , since there are only five transmitters available to cover the target line . in this case , the magnetic coupling between the transmitters is small , and hence they can be treated as independent transmitters . as shown in fig .[ fig : p0_1tx_x0 ] , using a single transmitter for wpt cannot provide any magnetic beamforming gain . as a result , neither transmitter location optimization nor current optimization yields notable performance gains .lastly , we consider the practical case where the total length of coil wire for manufacturing all transmitters is fixed as m , and thus the radius of each individual transmitter coil shrinks as increases .accordingly , we set the transmitter coil radius as in mm and keep the number of turns fixed as regardless of .the other parameters of the coils are assumed to be the same as in section [ sec : numerical example ] . fig .[ fig : p0min_n ] plots the minimum load power over the number of transmitters , for each of the three schemes .it is observed that for all three schemes , first increases and then decreases with .this implies that using either a small number of transmitters each with a larger coil or a large number of transmitters each with a smaller coil is inefficient in maximizing the minimum deliverable power .note that for the case of , i.e. , centralized wpt , , which is in accordance with the result in fig .[ fig : p0_1tx_x0 ] .
In this section, we extend the node placement optimization to the 2D target region case. We assume that the receiver can move horizontally within a disk of radius which has a fixed height of and the center point at , while all transmitters are placed in the z-plane, in parallel to the target region. Let , with , ( , with ) denote the location of transmitter (receiver). In this case, the mutual inductance expressions given in ([eq:h_nk]) and ([eq:h_n0]), as well as the approximation given in ([eq:hn0-simplified]), can be modified by setting and . Accordingly, the transmitters' sum power and the deliverable power to the receiver load given in ([eq:p_sum]) and ([eq:p_0]) can be re-expressed as functions of the receiver location and the transmitter currents and locations, respectively. Define , which is a convex set over and . The four performance metrics introduced for the 1D case, i.e., the average value, the minimum value, the maximum value, and the min-max ratio of the load power given in ([eq:p0ave])-([eq:min-max-ratio]), can be redefined accordingly. Moreover, with the optimal transmitter currents given in ([eq:u_n]) for magnetic beamforming, the deliverable power to the load in ([eq:p_0^*]) can be rewritten as . Similar to (P2) for the 1D case, we formulate the node placement problem to maximize the minimum deliverable power to the receiver over the 2D target region as (P3). Similar to the 1D case, it can be verified that the optimal transmitter locations in (P3) must be _circularly symmetric_ over the disk region. In general, multiple circularly symmetric structures may exist for a disk target region with different values of , as shown in Fig. [fig:symstructure] for the system of transmitters, where in total three circularly symmetric structures exist.
Denote as the number of all circularly symmetric structures for a given . For each structure , , we first simplify (P3) by exploiting the symmetry in the structure, and then solve it using an algorithm similar to that for the 1D case. Let and denote the optimized transmitter locations and the resulting minimum load power for structure , respectively. The optimal solution to (P3) is thus given by , where . Next, we illustrate the above procedure for the case of transmitters, while the approach is general and can be applied to cases with other values. For structure shown in Fig. [fig:symstructure](a), (P3) is simplified as where , , and , with (the regions of for structures and are shown in Figs. [fig:symstructure](b) and [fig:symstructure](c), respectively). In (P3), is the single decision variable (with as an auxiliary variable), and hence the algorithm can be easily modified to solve this problem. Let denote the obtained solution to (P3). Accordingly, we set , , for structure . Similarly, we can simplify (P3) for structures and , but we need to jointly optimize two decision variables and for these two structures, as shown in Figs. [fig:symstructure](b) and [fig:symstructure](c), respectively. The details are omitted for brevity. To illustrate the performance of joint magnetic beamforming and transmitter location optimization in the 2D target region case, we consider the same system parameters as in Section [sec:numerical example] for the 1D target line, which is now replaced by a disk target region of radius m, i.e., m in diameter, where the target region area ( ) is about ten times larger than the sum-area of all transmitter coils ( ). As shown in Fig. [fig:symstructure], three circularly symmetric structures exist for the system of transmitters. After obtaining the optimized transmitter locations for these structures, we have m with for structure . For structure , we similarly obtain m and .
For structure , we obtain m, m and . Since , it follows that structure has the best performance in terms of maximizing the minimum deliverable power to the load over the 2D target region.

[Table tab:quantitivecomparision3: performance comparison of the three circularly symmetric structures in terms of the load power metrics (in W and %); numerical values not recoverable.]

Figs. [fig:2d-p0](a), [fig:2d-p0](b), and [fig:2d-p0](c) show the deliverable load power distribution over the 2D disk region for structures , , and , respectively, with the optimized transmitter locations in each structure and optimal magnetic beamforming. The detailed performance comparison among the three structures is also summarized in Table [tab:quantitivecomparision3], from which it is observed that the minimum deliverable power achieved by structure is much larger than those of the other two structures.

In this paper, we have studied node placement optimization for a MISO MRC-WPT system with optimal distributed magnetic beamforming. First, we propose the optimal magnetic beamforming solution that jointly assigns the currents at the different transmitters subject to their sum-power constraint, with given locations of the transmitters and receiver. We show that although distributed WPT with optimal magnetic beamforming achieves better performance than centralized WPT, the resulting load power profile still fluctuates considerably over a given target region. To tackle this issue, we formulate a node placement problem that jointly optimizes the transmitter locations to maximize the minimum power delivered to the load over a 1D target region. We propose an efficient algorithm for solving this problem based on the bisection method and a gradient-based search, which is shown by simulation to improve the load power distribution significantly. Finally, we extend our design approach to the general case of a 2D target region and show that significant performance gains can also be achieved in this case.
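The max-min placement idea summarized above can be sketched numerically. The following is a stripped-down illustration of the gradient-based search component only (the paper's full algorithm also uses an outer bisection loop, which we omit), under a hypothetical 1D model of our own: each transmitter at location `s_n` contributes an inductance-like term `1/(1+(x-s_n)^2)` at receiver position `x`, and with beamforming the load power is taken proportional to the sum of these terms squared. None of these functional forms are the paper's exact expressions.

```python
import numpy as np

def load_power(x, s):
    # toy load power at receiver x with transmitters at locations s
    h = 1.0 / (1.0 + (x - s) ** 2)   # hypothetical inductance falloff
    return float(np.sum(h ** 2))      # beamforming combining, unit power

def min_power(s, grid):
    # worst-case load power over the target line
    return min(load_power(x, s) for x in grid)

def optimize_locations(s, grid, step=0.05, iters=300, eps=1e-4):
    # subgradient ascent on the min: repeatedly improve the power at the
    # current worst-case receiver location; keep the best iterate seen
    s = np.array(s, dtype=float)
    best, best_val = s.copy(), min_power(s, grid)
    for _ in range(iters):
        x_w = min(grid, key=lambda x: load_power(x, s))
        g = np.zeros_like(s)
        for k in range(len(s)):       # central-difference gradient
            d = np.zeros_like(s)
            d[k] = eps
            g[k] = (load_power(x_w, s + d) - load_power(x_w, s - d)) / (2 * eps)
        s = s + step * g
        val = min_power(s, grid)
        if val > best_val:
            best, best_val = s.copy(), val
    return best

grid = np.linspace(-1.0, 1.0, 41)   # hypothetical 1D target line
s0 = np.linspace(-1.0, 1.0, 5)      # uniform initial transmitter locations
s_opt = optimize_locations(s0, grid)
```

Keeping the best iterate guarantees the returned placement never does worse than the uniform initialization, which is all this sketch is meant to show.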
In this paper, for simplicity we assume identical transmitter coils of equal size, while the performance of WPT may be further improved if the sizes of the transmitter coils are optimized jointly with the transmitter locations, an interesting problem worthy of further investigation.

In ([eq:h_n0]), we can express , with . Given , we have over , since its maximum value over is , with , which decreases to zero as and for . Hence, we can simplify ([eq:h_n0]) as where . Next, let , where denotes a real number and represents the Laplace transform. Specifically, we have . It is known that for any real function , with denoting its Laplace transform, we have , and so on.

The optimal current solution s to (P1) can be obtained by leveraging the Karush-Kuhn-Tucker (KKT) conditions of the optimization problem. Let denote the dual variable corresponding to the constraint ([eq:p1_c1]). The Lagrangian of (P1) is then written as . The KKT conditions of (P1) are thus given by , where ([eq:primalfes]) and ([eq:dualfes]) are the feasibility conditions for the primal and dual solutions, respectively, ([eq:kkt-zero-deriv]) is due to the fact that the gradient of the Lagrangian with respect to the optimal primal solution s must vanish, and ([eq:kkt-compl-slackn]) represents complementary slackness. To solve the set of equations in ([eq:primalfes])-([eq:kkt-compl-slackn]), we consider two possible cases as follows.

Case 1: . It can be verified that any set of s satisfying and can satisfy the KKT conditions ([eq:primalfes])-([eq:kkt-compl-slackn]) in this case. However, the resulting s will make the objective function of (P1) in ([eq:p1_obj]) equal to zero, which cannot be the optimal value of (P1); as a result, this case cannot lead to the optimal solution to (P1).

Case 2: .
From ([eq:kkt-zero-deriv]), it follows that , . Moreover, from ([eq:kkt-compl-slackn]), it follows that . Accordingly, we obtain , , and , where the obtained s and satisfy the KKT conditions ([eq:primalfes])-([eq:kkt-compl-slackn]). Note that, except for the above set of primal and dual solutions to (P1), s and , given in Case , no other solution satisfies the KKT conditions ([eq:primalfes])-([eq:kkt-compl-slackn]). Thus, we can conclude that the solution obtained in Case is indeed the optimal solution to (P1), because the KKT conditions are necessary (albeit not necessarily sufficient) for the optimality of a non-convex optimization problem, as is the case for (P1). The proof is thus completed.

A. Kurs, A. Karalis, R. Moffatt, J. D. Joannopoulos, P. Fisher, and M. Soljacic, "Wireless power transfer via strongly coupled magnetic resonances," _Science_, pp. 83-86, July 2007.

Y. Li, J. Li, K. Wang, W. Chen, and X. Yang, "A maximum efficiency point tracking control scheme for wireless power transfer systems using magnetic resonant coupling," _IEEE Trans. Power Electron._, vol. 30, no. 7, pp. 3998-4008, July 2015.

J. Shin, S. Shin, Y. Kim, S. Ahn, S. Lee, G. Jung, S. Jeon, and D. Cho, "Design and implementation of shaped magnetic-resonance-based wireless power transfer system for roadway-powered moving electric vehicles," _IEEE Trans._, pp. 1179-1192, Mar. 2014.

Y. Zhang, T. Lu, Z. Zhao, F. He, K. Chen, and L. Yuan, "Selective wireless power transfer to multiple loads using receivers of different resonant frequencies," _IEEE Trans. Power Electron._, vol. 30, no. 11, pp. 6001-6005, Nov. 2015.

Kim, D. Ha, W. J. Chappell, and P. P. Irazoqui, "Selective wireless power transfer for smart power distribution in a miniature-sized multiple-receiver system," _IEEE Trans._, pp. 1853-1862, Mar.

M. R. V. Moghadam and R.
Zhang, "Multiuser wireless power transfer via magnetic resonant coupling: performance analysis, charging control, and power region characterization," _IEEE Trans. Signal Inf. Process._, pp. 72-83, Mar. 2016.

R. Tseng, B. Novak, S. Shevde, and K. Grajski, "Introduction to the alliance for wireless power loosely-coupled wireless power transfer system specification version 1.0," in _Proc. Wireless Power Transfer (WPT)_, pp. 79-83, May 2013.

In multiple-input single-output (MISO) wireless power transfer (WPT) via magnetic resonant coupling (MRC), multiple transmitters are deployed to enhance the efficiency of power transfer to the electric load at a single receiver by jointly optimizing their source currents to constructively combine the induced magnetic fields at the receiver, known as _magnetic beamforming_. In practice, since the receiver is desired to be freely located in a target region for wireless charging, its received power can fluctuate significantly over locations even with adaptive magnetic beamforming applied. To achieve uniform coverage, the transmitters need to be optimally placed such that a minimum charging power can be achieved for the receiver regardless of its location in the region, which motivates this paper. First, we derive the optimal magnetic beamforming solution in closed form for a distributed MISO WPT system with fixed locations of the transmitters and receiver, to maximize the deliverable power to the receiver subject to a given sum-power constraint at all transmitters. With the optimal magnetic beamforming derived, we then jointly optimize the locations of all transmitters to maximize the minimum power deliverable to the receiver over a given one-dimensional (1D) region. Although the problem is non-convex, we propose an iterative algorithm for solving it efficiently.
Extensive simulation results are provided which show the significant performance gains of the proposed design, with optimized transmitter locations and magnetic beamforming, as compared to benchmark schemes with non-adaptive or heuristic current allocation and transmitter placement. Last, we extend our approach to the general two-dimensional (2D) region case, and highlight the key insights for practical design.

Wireless power transfer, magnetic resonant coupling, magnetic beamforming, node placement optimization, uniform coverage.
In his seminal paper "The future of data analysis" (Tukey, ), John W. Tukey writes:

"It will still be true that there will be aspects of data analysis well called technology, but there will also be the hallmarks of stimulating science: intellectual adventure, demanding calls upon insight, and a need to find out 'how things really are' by investigation and the confrontation of insights with experience" (p. 63).

Fast forward to 2013: in the age of information technology, these words of Tukey ring as true as fifty years ago, but with a new twist: the ubiquitous and massive data today were impossible to imagine in 1962.
From the point of view of science, information technology and data are a blessing and a curse. The reasons for them to be a blessing are many and obvious. The reasons for them to be a curse are less obvious. One of them is well articulated recently by two prominent biologists in an editorial, Casadevall and Fang ( ), in _Infection and Immunity_ (of the American Society for Microbiology):

"Although scientists have always comforted themselves with the thought that science is self-correcting, the immediacy and rapidity with which knowledge disseminates today means that incorrect information can have a profound impact before any corrective process can take place" (p. 893).
"A recent study analyzed the cause of retraction for 788 retracted papers and found that error and fraud were responsible for 545 (69%) and 197 (25%) cases, respectively, while the cause was unknown in 46 (5.8%) cases (31)" (p. 893).

The study referred to is Steen ( ) in the _Journal of Medical Ethics_.
Of the 788 retracted papers from PubMed from 2000 to 2010, 69% are marked as "errors" on the retraction records. Statistical analyses are likely to be involved in these errors. Casadevall and Fang go on to call for "enhanced training in probability and statistics," among other remedies, including "reembracing philosophy." More often than not, modern scientific findings rely on statistical analyses of high-dimensional data, and reproducibility is imperative for any scientific discovery. Scientific reproducibility is therefore a responsibility of statisticians. At a minimum, reproducibility manifests itself in the stability of statistical results relative to "reasonable" perturbations to the data and to the method or model used. Reproducibility of scientific conclusions is closely related to their reliability. It is receiving much well-deserved attention lately in the scientific community (e.g., Ioannidis; Kraft et al.; Casadevall and Fang; Nosek et al.) and in the media (e.g., Naik; Booth). Drawing a scientific conclusion involves multiple steps. First, data are collected by one laboratory or one group, ideally with a clear hypothesis in the mind of the experimenter or scientist.
In the age of information technology, however, more and more massive amounts of data are collected for fishing expeditions to "discover" scientific facts. These expeditions involve running computer code on data for data cleaning and analysis (modeling and validation). Before these facts become "knowledge," they have to be reproduced or replicated through new sets of data by the same group or, preferably, by other groups. Given a fixed set of data, Donoho et al. ( ) discuss reproducible research in computational harmonic analysis, with implications for computer-code or computing-environment reproducibility in computational sciences, including statistics. Fonio et al. ( ) discuss replicability between laboratories as an important screening mechanism for discoveries. Reproducibility can have a multitude of meanings to different people. One articulation of the meanings of reproducibility, replication, and repeatability can be found in Stodden ( ). In this paper, we advocate for more involvement of statisticians in science and for an enhanced emphasis on stability within the statistical framework. Stability has been of great concern in statistics. For example, in the words of Hampel et al. ( ), "robustness theories can be viewed as stability theories of statistical inference" (p. 8). Even in low-dimensional linear regression models, collinearity is known to cause instability of OLS estimates of individual parameters, so that significance testing for these estimates becomes unreliable.
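The collinearity instability just mentioned is easy to see numerically. The following small illustration (ours, not from the paper) builds two nearly collinear predictors and shows that a mild data perturbation, dropping a few observations, can move the individual OLS coefficients far more than it moves the fitted values; all names and the simulated data are hypothetical.

```python
import numpy as np

# simulate a nearly collinear design: x2 is x1 plus tiny noise
rng = np.random.default_rng(0)
n = 50
x1 = rng.normal(size=n)
x2 = x1 + 0.001 * rng.normal(size=n)   # nearly collinear predictor
y = x1 + x2 + rng.normal(size=n)
X = np.column_stack([x1, x2])

def ols(X, y):
    # ordinary least squares coefficients
    return np.linalg.lstsq(X, y, rcond=None)[0]

beta_full = ols(X, y)
keep = np.arange(n) >= 5               # perturbation: drop five data units
beta_drop = ols(X[keep], y[keep])

coef_change = np.abs(beta_full - beta_drop).max()   # coefficient swing
pred_change = np.abs(X @ beta_full - X @ beta_drop).max()  # fit change
cond = np.linalg.cond(X)               # large value => ill-conditioned
```

The fitted values (and hence prediction) are stable because the two predictors carry almost the same information; it is only the attribution between them, the quantity a significance test interrogates, that is unstable.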
Here we demonstrate the importance of statistics for understanding our brain; we describe our methodological work on estimation stability that helps interpret models reliably in neuroscience; and we articulate how solving neuroscience problems motivates theoretical work on stability and robust statistics in high-dimensional regression models. In other words, we tell an intertwining story of scientific investigation and statistical developments.

The rest of the paper is organized as follows. In Section [sec:fmri], we cover our "intellectual adventure" into neuroscience, in collaboration with the Gallant Lab at UC Berkeley, to understand the human visual pathway via fMRI brain signals invoked by natural stimuli (images or movies) (cf. Kay et al.; Naselaris et al.; Kay and Gallant; Naselaris et al.). In particular, we describe how our statistical encoding and decoding models are the backbones of "mind-reading computers," named one of the 50 best inventions of 2011 by Time magazine (Nishimoto et al.). In order to find out "how things really are," we argue that reliable interpretation needs stability. We define stability relative to a data perturbation scheme. In Section [sec:stability], we briefly review the vast literature on different data perturbation schemes such as jackknife, subsampling, and bootstrap. (We note that data perturbation in general means not only taking subsets of data units from a given data set, but also sampling from an underlying distribution or replicating the experiment for a new set of data.) In Section [sec:es-cv], we review an estimation stability (ES) measure taken from Lim and Yu ( ) for regression feature selection.
Combining ES with CV as in Lim and Yu ( ) gives rise to a smoothing parameter selector, ES-CV, for Lasso (or other regularization methods). When we apply ES-CV to the movie-fMRI data, we obtain a 60% reduction of the model size, or the number of features selected, at a negligible loss of 1.3% in terms of prediction accuracy. Subsequently, the ES-CV-Lasso models are both sparse and more reliable, and hence better suited for interpretation due to their stability and simplicity. The stability considerations in our neuroscience endeavors have prompted us to connect with the concept of stability from the robust statistics point of view. In El Karoui ( ), we obtain very interesting theoretical results in high-dimensional regression models with predictors and samples, shedding light on how sample variability in the design matrix meets heavier-tailed error distributions when the ratio of predictors to samples is approximately constant, i.e., in the random matrix regime. We describe these results for an important special case in Section [sec:robust]. In particular, we see that when the ratio of predictors to samples is large enough, the ordinary least squares (OLS) estimator is better than the least absolute deviation (LAD) estimator when the error distribution is double exponential. We conclude in Section [sec:conclusions].

Neuroscience holds the key to understanding how our mind works. Modern neuroscience is invigorated by massive and multi-modal forms of data enabled by advances in technology (cf. Akil, Martone and Van Essen). Building mathematical/statistical models on these data, computational neuroscience is at the frontier of neuroscience. The Gallant Lab at UC Berkeley is a leading neuroscience lab specializing in understanding the visual pathway, and is a long-term collaborator with the author's research group. It pioneered the use of natural stimuli in experiments to invoke brain signals, in contrast to synthetic stimuli such as white noise and moving bars or checkerboards as previously used. Simply put
, the human visual pathway works as follows. Visual signals are recorded by the retina, and through the relay center LGN they are transmitted to the primary visual cortex area V1, and on to V2 and V4 along the "what" pathway (in contrast to the "where" pathway) (cf. Goodale and Milner). Computational vision neuroscience aims at modeling two related tasks carried out by the brain (cf. Dayan and Abbott) through two kinds of models. The first kind, the encoding model, predicts brain signals from visual stimuli, while the second kind, the decoding model, recovers visual stimuli from brain signals. Often, decoding models are built upon encoding models and hence indirectly validate the former, but they are important in their own right. In the September issue of _Current Biology_, our joint paper with the Gallant Lab, Nishimoto et al. ( ), presents a decoding (or movie reconstruction) algorithm to reconstruct movies from fMRI brain signals. This work has received intensive and extensive coverage by the media, including The Economist's Oct. 29th, 2011 issue ("Reading the brain: mind-goggling") and National Public Radio in their program "Forum with Michael Krasny" on Tue., Sept. 27, 2011 at 9:30 am ("Reconstructing the mind's eye"). As mentioned earlier, it was selected by Time magazine as one of the best 50 inventions of 2011 and dubbed "mind-reading computers" on the cover page of Time's invention issue.

_What is really behind the movie reconstruction algorithm?_ _Can we learn something from it about how the brain works?_ The movie reconstruction algorithm consists of statistical encoding and decoding models, both of which employ regularization. The former are sparse models, concise enough to be viewed, and are built via Lasso for each voxel separately.
However, as is well known, Lasso results are not stable or reliable enough for scientific interpretation, due to the regularization and the emphasis of CV on prediction performance. So Lasso is not estimation stable. The decoding model uses the estimated encoding model for each voxel and Tikhonov regularization, or ridge, in covariance estimation to pool information across different voxels over V1, V2 and V4 (Nishimoto et al.). Then an empirical prior for clips of short videos, drawn from movie trailers and YouTube, is used to induce posterior weights on video clips in the empirical prior database. Tikhonov or ridge regularization concerns itself with the estimation of the covariance between voxels, which is not of interest for interpretation. The encoding phase is the focus here from now on.

V1 is a primary visual cortex area and the best understood area in the visual cortex. Hubel and Wiesel received a Nobel Prize in Physiology or Medicine in 1981 for two major scientific discoveries. One is Hubel and Wiesel ( ), which uses cat physiology data to show, roughly speaking, that simple V1 neuron cells act like Gabor filters, or as angled edge detectors. Later, using solely image data, Olshausen and Field ( ) showed that image patches can be sparsely represented over Gabor-like basis image patches. The appearance of Gabor filters in both places is likely not a coincidence, due to the fact that our brain has evolved to represent the natural world. These Gabor filters have different locations, frequencies and orientations. Previous work from the Gallant Lab has built a filter bank of such Gabor filters and successfully used them to design encoding models for single neuron signals in V1 invoked by static natural image stimuli (Kay et al.; Naselaris et al.).
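For readers unfamiliar with Gabor filters, here is a minimal numpy sketch of one such 2D filter: a sinusoidal carrier at a given orientation and spatial frequency, windowed by a Gaussian envelope. The parameter names and default values are ours for illustration, not the Gallant Lab's actual filter-bank specification.

```python
import numpy as np

def gabor(size=32, theta=0.0, freq=0.2, sigma=6.0, phase=0.0):
    # one 2D Gabor filter: Gaussian envelope times an oriented sinusoid
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    xr = x * np.cos(theta) + y * np.sin(theta)        # rotated coordinate
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * freq * xr + phase)
    return envelope * carrier

# a toy bank varying orientation and frequency; a feature for an image
# patch is its inner product with each filter in the bank
bank = [gabor(theta=t, freq=f)
        for t in np.linspace(0, np.pi, 4, endpoint=False)
        for f in (0.1, 0.2)]
```

A full bank of the kind described above would additionally vary location (by shifting the envelope) and, for movies, extend the carrier into time to form motion-energy filters.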
In Nishimoto et al. ( ), we use fMRI brain signals observed over 2700 voxels in different areas of the visual cortex. fMRI signals are indirect and non-invasive measures of neural activities in the brain, with good spatial coverage and temporal resolution in seconds. Each voxel is roughly a cube of 1 mm by 1 mm by 1 mm and contains hundreds of thousands of neurons. Leveraging the success of Gabor-filter-based models for single neuron brain signals, for a given image a vector of features is extracted by 2D wavelet filters. This feature vector has been used to build encoding models for fMRI brain signals in Kay et al. ( ) and Naselaris et al. ( ). Invoked by clips of videos/movies, fMRI signals from three subjects are collected with the same experimental set-up. To model fMRI signals invoked by movies, a 3-dimensional motion-energy Gabor filter bank has been built in Nishimoto et al. ( ) to extract a feature vector of dimension 26K. Linear models are then built on these features at the observed time point and lagged time points. At present, sparse linear regression models are favorites of the Gallant Lab, fitted through Lasso or e-L2Boost. These sparse models give prediction performance on validation data similar to neural nets and kernel machines on image-fMRI data; they correspond well to the neuroscience knowledge on V1; and they are easier to interpret than neural net and kernel machine models that include all features or variables. For each subject, following a rigorous protocol in the Gallant Lab, the movie data (how many frames per second?
) consists of three batches: training, test and validation. The training data are used to fit a sparse encoding model via Lasso or e-L2Boost, and the test data are used to select the smoothing parameter by CV. These data are averages of two or three replicates. That is, the same movie is played to one subject two or three times, and the resulting fMRI signals are called replicates. Then the completely determined encoding model is used to predict the fMRI signals in the validation data (with 10 replicates), and the prediction performance is measured by the correlation between the predicted and observed fMRI signals, for each voxel and for each subject. Good prediction performance is observed for such encoding models (cf. Figure [fig:scatter-hist]). Prediction and movie reconstruction are good steps to validate the encoding model in order to understand the human visual pathway. But the science lies in finding the features that might drive a voxel, or, to use Tukey's words, finding out "how things really are." It is often the case that the number of data units could easily have been different from what is in the collected data. There are some hard resource constraints, such as that human subjects cannot lie inside an fMRI machine for too long, and it also costs money to use the fMRI machine. But whether the data collected are for 2 hours, as in our data, or for 1 hour 50 min or 2 hours and 10 min, is a judgement call by the experimenter given the constraints. Consequently, scientific conclusions, or in our case, candidates for driving features, should be stable relative to removing a small proportion of data units, which is one form of reasonable or appropriate data perturbation, or reproducible without a small proportion of the data units.
With a smaller set of data, a more conservative scientific conclusion is often reached, which is deemed worthwhile for the sake of more reliable results.

Statistics is not the only field that uses mathematics to describe phenomena in the natural world. Other such fields include numerical analysis, dynamical systems, and PDEs and ODEs. Concepts of stability are central in all of them, implying the importance of stability in quantitative methods or models when applied to real-world problems. The necessity for a procedure to be robust to data perturbation is a very natural idea, easily explainable to a child. Data perturbation has had a long history in statistics, and it has at least three main forms: jackknife, subsampling and bootstrap. Huber ( ) writes in "John W. Tukey's contribution to robust statistics": "[Tukey] preferred to rely on the actual batch of data at hand rather than on a hypothetical underlying population of which it might be a sample" (p. 1643). All three main forms of data perturbation rely on an "actual batch of data," even though their theoretical analyses do assume hypothetical underlying populations of which the data are a sample. They all have had long histories. The jackknife can be traced back at least to Quenouille ( ), where it was used to estimate the bias of an estimator. Tukey ( ), an abstract in the Annals of Mathematical Statistics, has been regarded as a key development because of his use of the jackknife for variance estimation. Miller ( ) is an excellent early review of the jackknife, with extensions to regression and time series situations. Hinkley ( ) proposes a weighted jackknife for unbalanced data, for which Wu ( ) provides a theoretical study. Künsch ( ) develops the jackknife further for time series. Subsampling, on the other hand, was started three years earlier than the jackknife, by Mahalanobis ( ). Hartigan ( ) builds a framework for confidence interval estimation based on subsampling. Carlstein ( ) applies subsampling (which he
called subseries) to the time series context. politis and romano ( ) study subsampling for general weakly dependent processes. cross-validation (cv) has a more recent start in allen ( ) and stone ( ). it gives an estimated prediction error that can be used to select a particular model in a class of models or along a path of regularized models. it has been wildly popular for modern data problems, especially for high-dimensional data and machine learning methods. hall ( ) and li ( ) are examples of theoretical analyses of cv. efron's ( ) bootstrap is widely used, and it can be viewed as a simplified jackknife or subsampling. examples of early theoretical studies of the bootstrap are bickel and freedman ( ) and beran ( ) for the i.i.d. case, and künsch ( ) for time series. much more on these three data perturbation schemes can be found in books, for example, by efron and tibshirani ( ), shao and tu ( ) and politis, romano and wolf ( ). if we look into the literature of probability theory, the mathematical foundation of statistics, we see that a perturbation argument is central to limiting law results such as the central limit theorem (clt). the clt has been the bedrock of classical statistical theory. one proof of the clt is composed of two steps and is well exposited in terence tao's lecture notes, available at his website (tao, ). given a normalized sum of i.i.d. random variables, the first step proves the universality of a limiting law through a perturbation argument, or lindeberg's swapping trick.
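the swapping idea can be seen numerically. below is a hedged sketch (a synthetic demonstration, not taken from the text): the normalized sum of gaussian summands and that of rademacher (±1) summands, which match in their first two moments, are essentially indistinguishable in distribution.

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 400, 20000

def normalized_sums(draw):
    # S_n / sqrt(n) for i.i.d. summands with mean 0 and variance 1
    return draw((reps, n)).sum(axis=1) / np.sqrt(n)

# two summand distributions with matching first and second moments
gauss = normalized_sums(rng.standard_normal)
coins = normalized_sums(lambda size: rng.choice([-1.0, 1.0], size=size))

# kolmogorov-type distance between the two empirical distributions
grid = np.linspace(-3, 3, 61)
ks = np.max(np.abs(
    (gauss[:, None] <= grid).mean(0) - (coins[:, None] <= grid).mean(0)))
print(ks)
```

swapping one summand distribution for the other leaves the distribution of the normalized sum nearly unchanged, which is exactly the universality that the first step of the proof establishes.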
that is, one proves that a perturbation of the (normalized) sum by a random variable with matching first and second moments does not change the distribution of the (normalized) sum. the second step finds the limit law by way of solving an ode. recent generalizations to obtain other universal limiting distributions can be found in chatterjee ( ) for the wigner law under non-gaussian assumptions and in suidan ( ) for last passage percolation. it is not hard to see that concentration results, the cornerstone of theoretical high-dimensional statistics, also assume stability-type conditions. in learning theory, stability is closely related to good generalization performance (devroye and wagner, , kearns and ron, , bousquet and elisseeff, , kutin and niyogi, , mukherjee et al., , shalev-shwartz et al., ). to further our discussion on stability, we would like to explain what we mean by statistical stability. we say statistical stability holds if statistical conclusions are robust or stable to appropriate perturbations of the data. that is, statistical stability is well defined relative to a particular aim and a particular perturbation of the data (or model). for example, the aim could be estimation, prediction or a limiting law. it is not difficult to get statisticians to agree on what are appropriate data perturbations when the data units are i.i.d. or exchangeable in general, in which case subsampling or the bootstrap is appropriate. when data units are dependent, transformations of the original data are necessary to arrive at modified data that are close to i.i.d.
or exchangeable, such as in the parametric bootstrap for linear models or the block bootstrap for time series. when subsampling is carried out, the reduced sample size in the subsample does have an effect on the detectable difference, say between treatment and control. if the difference is large, this reduction in sample size would be negligible. when the difference is small, we might not detect it with a reduced sample size, leading to a more conservative scientific conclusion. because of the utmost importance of reproducibility for science, i believe that this conservatism is acceptable and may even be desirable in the current scientific environment of over-claims. for the fmri problem, let us recall that for each voxel, lasso or e-l2boost is used to fit the mean function in the encoding model, with cv to choose the smoothing parameter. different model selection criteria have been known to be unstable. breiman ( ) compares the predictive stability of forward selection, two versions of the garrote, and ridge regression; their stability increases in that order. he goes on to propose averaging unstable estimators over different perturbed data sets in order to stabilize them. such estimators are prediction driven, however; they are not sparse and are thereby not suitable for interpretation. in place of the bootstrap for prediction error estimation as in efron ( ), zhang ( ) uses multi-fold cross-validation, while shao ( ) uses m out of n bootstrap samples with $m/n \rightarrow 0$.
they then select models with this estimated prediction error, and provide theoretical results for low-dimensional or fixed-$p$ linear models. heuristically, the m out of n bootstrap in shao ( ) is needed because the model selection procedure is a discrete (or set-) valued estimator of the true predictor set and hence non-smooth (cf. bickel, götze, and van zwet, ). the lasso (tibshirani, ) is a modern model selection method for linear regression and very popular in high dimensions: $$\hat{\beta}(\lambda) = \arg\min_{\beta} \bigl\{ \|Y - X\beta\|_2^2 + \lambda \|\beta\|_1 \bigr\},$$ where $Y$ is the response vector and $X$ is the $n \times p$ design matrix. that is, there are $n$ data units and $p$ predictors. for each $\lambda$, there is a unique $l_1$ norm $\tau = \|\hat{\beta}(\lambda)\|_1$ for its solution that we can use to index the solution as $\hat{\beta}[\tau]$. cross-validation (cv) is used most of the time to select $\lambda$ or $\tau$, but lasso is unstable relative to bootstrap or subsampling perturbations when predictors are correlated (cf. meinshausen and bühlmann, , bach, ). using the bootstrap in a different manner than shao ( ), bach ( ) proposes bolasso to improve lasso's model selection consistency property by taking the smallest intersecting model of the selected models over different bootstrap samples. for particular smoothing parameter sequences, the bolasso selector is shown by bach ( ) to be model selection consistent in the low-dimensional case without the irrepresentable condition needed for lasso (cf. meinshausen and bühlmann, , zhao and yu, ; zou, ; wainwright, ). meinshausen and bühlmann ( ) also weaken the irrepresentable condition for model selection consistency of a stability selection criterion built on top of lasso. they bring perturbations to the lasso path through a random weight vector in the lasso penalty, resulting in many random lasso paths. a threshold parameter is needed to distinguish important features based on these random paths. they do not consider the problem of selecting one smoothing parameter value for lasso as in lim and yu ( ). we would like to seek a specific model along the lasso path to interpret
and hence to select a specific $\lambda$ or $\tau$. it is well known that cv does not provide a good interpretable model because lasso is unstable. lim and yu ( ) propose a stability-based criterion termed estimation stability (es). they use the cross-validation data perturbation scheme. that is, the $n$ data units are randomly partitioned into $V$ blocks, yielding $V$ pseudo data sets (subsamples), the $v$th obtained by deleting the $v$th block. given a smoothing parameter, a lasso estimate is obtained from each pseudo data set. since the $l_1$ norm is a meaningful quantity with which to line up the different estimates, lim and yu ( ) use it, denoted as $\tau$ below, to line up these estimates $\hat{\beta}[\tau; v]$ to form an estimate for the mean regression function and an approximate delete-$d$ jackknife estimator for its variance: $$\bar{m}[\tau] = \frac{1}{V} \sum_{v=1}^{V} X \hat{\beta}[\tau; v], \qquad \widehat{\operatorname{var}}\bigl(\hat{m}[\tau]\bigr) = \frac{1}{V} \sum_{v=1}^{V} \bigl\| X \hat{\beta}[\tau; v] - \bar{m}[\tau] \bigr\|^2.$$ the last expression is only an approximate delete-$d$ jackknife variance estimator unless all the subsamples of that size are used. define the (estimation) statistical stability measure as $$\mathrm{es}(\tau) = \frac{\widehat{\operatorname{var}}(\hat{m}[\tau])}{\|\bar{m}[\tau]\|^2}.$$ for nonlinear regression functions, es can still be applied if we take an average of the estimated regression functions. note that es aims at estimation stability, while cv aims at prediction stability. in fact, es is essentially a scaled reciprocal of a test statistic for testing the hypothesis that the mean regression function is zero.
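a minimal sketch of the es computation above (an illustration, not lim and yu's code: pseudo data sets are formed by deleting one block each, and, for simplicity, the estimates are lined up by a shared penalty grid rather than by the $l_1$ norm $\tau$):

```python
import numpy as np
from sklearn.linear_model import Lasso

def es_curve(X, y, penalties, V=5, seed=0):
    """estimation-stability curve: for each smoothing parameter, refit lasso on
    V delete-one-block pseudo data sets and measure the variability of the
    fitted mean vector relative to its size (smaller es = more stable)."""
    n = len(y)
    blocks = np.array_split(np.random.default_rng(seed).permutation(n), V)
    es = []
    for lam in penalties:
        fits = []
        for b in blocks:
            keep = np.setdiff1d(np.arange(n), b)
            beta = Lasso(alpha=lam, max_iter=5000).fit(X[keep], y[keep]).coef_
            fits.append(X @ beta)                 # fitted mean on full design
        fits = np.array(fits)
        m_bar = fits.mean(axis=0)
        var_hat = np.mean(np.sum((fits - m_bar) ** 2, axis=1))
        es.append(var_hat / np.sum(m_bar ** 2))   # variance / squared size
    return np.array(es)

rng = np.random.default_rng(3)
X = rng.standard_normal((120, 30))
y = X[:, :3] @ np.array([3.0, 2.0, -2.0]) + rng.standard_normal(120)
es = es_curve(X, y, penalties=[0.01, 0.05, 0.1, 0.3])
print(es.round(3))
```

the es-cv rule discussed next then picks a smoothing parameter by combining a curve like this with the usual cv error curve.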
to combat the high-noise situation, where es would not have a well-defined minimum, lim and yu ( ) combine es with cv to propose the _es-cv selection criterion_ for the smoothing parameter: _choose the largest $\tau$ that minimizes es and is smaller than or equal to the cv selection._ es-cv is applicable to smoothing parameter selection in lasso and other regularization methods such as tikhonov or ridge regularization (see, for example, tikhonov, , markovich, , hoerl, , hoerl and kennard, ). es-cv is as well suited for parallel computation as cv and incurs only a negligible computational overhead, because the pseudo-data-set estimates are already computed for cv. moreover, simulation studies in lim and yu ( ) indicate that, when compared with lasso, es-cv applied to lasso gains dramatically in terms of false discovery rate while it loses only somewhat in terms of true discovery rate. the features or predictors in the movie-fmri problem are 3-d gabor wavelet filters, each characterized by a (discretized) spatial location on the image, a (discretized) frequency of the filter, a (discretized) orientation of the filter, and 4 (discrete) time-lags on the corresponding image that the 2-d filter is acting on. for the results comparing cv and es-cv in figure [fig:feature-locations], we use a reduced set of features or predictors, corresponding to a coarser set of filter frequencies than what is used in nishimoto et al. ( ). we apply both cv and es-cv to select the smoothing parameters in lasso (or e-l2boost). for three voxels (and a particular subject), for simplicity of display, we show the locations of the selected features (regardless of their frequencies, orientations and time-lags) in figure [fig:feature-locations]. for these three voxels, es-cv maintains almost the same prediction correlation performance as cv (0.70 vs.
0.72) while selecting many fewer and more concentrated locations than cv. figure [fig:scatter-hist] shows the comparison results across 2088 voxels in the visual cortex that were selected for their high snrs. it is composed of four sub-plots. the upper two plots compare the prediction correlation performance of the models built via lasso with cv and es-cv on validation data. for each model fitted on training data and each voxel, predicted responses over the validation data are calculated. their correlation with the observed response vector is the ``prediction correlation'' displayed in figure [fig:scatter-hist]. the lower two plots compare the sparsity properties, or sizes, of the models. because of the definition of es-cv, it is expected that the es-cv models are always smaller than or equal to the cv models. the sparsity advantage of es-cv is apparent, with a huge overall reduction of 60% in the number of selected features and a minimal loss of overall prediction accuracy of only 1.3%. the average size of the es-cv models is 24.3 predictors, while that of the cv models is 58.8 predictors; the average prediction correlation performance of the es-cv models is 0.499, while that of the cv models is 0.506. robust statistics also deals with stability, relative to model perturbation. in the preface of his book ``robust statistics,'' huber ( ) states: ``primarily, we are concerned with _distributional robustness_: the shape of the true underlying distribution deviates slightly from the assumed model.'' hampel, rousseeuw, ronchetti and stahel ( ) write: ``overall, and in analogy with, for example, the stability aspects of differential equations or of numerical computations, robustness theories can be viewed as stability theories of statistical inference'' (p.
8). tukey ( ) has generally been regarded as the first paper on robust statistics. fundamental contributions were made by huber ( ) on m-estimation of location parameters and by hampel ( ) on the ``breakdown'' point and the influence curve. further important contributions can be found, for example, in andrews et al. ( ) and bickel ( ) on the one-step huber estimator, and in portnoy ( ) on m-estimation in the dependent case. for most statisticians, robust statistics in linear regression is associated with studying estimation problems when the errors have heavier-tailed distributions than the gaussian distribution. in the fmri problem, we fit mean functions with an l2 loss. what if the ``errors'' have heavier tails than gaussian tails? since the l1 loss is commonly used in robust statistics to deal with heavier-tailed errors in regression, we may wonder whether the l1 loss would add more stability to the fmri problem. in fact, for high-dimensional data such as in our fmri problem, removing some data units could severely change the outcome of our model because of feature dependence. this phenomenon is also seen in simulated data from linear models with gaussian errors in high dimensions. _how does sample-to-sample variability interact with heavy-tailed errors in high dimensions?
_ in our recent work el karoui et al. ( ), we seek insights into this question through analytical work. we are able to see interactions between sample variability and double-exponential-tailed errors in a high-dimensional linear regression model. that is, let us assume the linear regression model $$y_i = x_i^\prime \beta + \varepsilon_i, \qquad i = 1, \ldots, n,$$ where an m-estimator with respect to a loss function $\rho$ is given as $$\hat{\beta}(\rho) = \arg\min_{\beta} \sum_{i=1}^{n} \rho\bigl(y_i - x_i^\prime \beta\bigr).$$ we consider the random-matrix high-dimensional regime: $p/n \rightarrow \kappa \in (0, 1)$. due to rotation invariance, wlog, we can assume $\beta = 0$. we cite below a result from el karoui et al. ( ) for the important special case of gaussian design, $x_i \sim N(0, I_p)$: [res1] under the aforementioned assumptions, let $\kappa = p/n$; then $\hat{\beta}(\rho)$ is distributed as $\|\hat{\beta}(\rho)\| u$, where $u$ is uniform on the unit sphere, and $\|\hat{\beta}(\rho)\| \rightarrow r_\rho(\kappa)$ as $n \rightarrow \infty$ and $p/n \rightarrow \kappa$. denote $\hat{z}_\varepsilon = \varepsilon + r_\rho(\kappa) Z$, where $Z \sim N(0,1)$ and independent of $\varepsilon$, and let $$\operatorname{prox}_c(x) = \arg\min_{y} \bigl[ c \rho(y) + (x - y)^2 / 2 \bigr].$$ then $r_\rho(\kappa)$ satisfies the following system of equations together with some nonnegative $c$: $$\begin{aligned} E \bigl\{ \bigl[ \operatorname{prox}_c(\hat{z}_\varepsilon) \bigr]^\prime \bigr\} &= 1 - \kappa, \\ E \bigl\{ \bigl[ \hat{z}_\varepsilon - \operatorname{prox}_c(\hat{z}_\varepsilon) \bigr]^2 \bigr\} &= \kappa r_\rho^2(\kappa). \end{aligned}$$ in our limiting result, the norm of the m-estimator stabilizes. it is most interesting to mention that in the proof a ``leave-one-out'' trick is used both row-wise and column-wise, such that one by one rows are deleted and similarly columns are deleted. the estimators with deletions are then compared to the estimator with no deletion. this is in effect a perturbation argument and reminiscent of the ``swapping trick'' for proving the clt as discussed before. our analytical derivations involve prox functions, which are reminiscent of the second step in proving normality in the clt. this is because a prox function is a form of derivative, not dissimilar to the derivative appearing in the ode derivation of the analytical form of the limiting distribution (e.g., the normal distribution) in the clt.
in the case of i.i.d. double-exponential errors, el karoui et al. ( ) numerically solve the two equations in result [res1] to show that when $\kappa > 0.3$ or so, $l_2$ loss fitting (ols) is better than $l_1$ loss fitting (lad) in terms of mse or variance. they also show that the numerical results match very well with simulation or monte carlo results. at a high level, we may view that $\hat{z}_\varepsilon$ holds the key to this interesting phenomenon. being a weighted convolution of $\varepsilon$ and $Z$, it embeds the interaction between sample variability (expressed in $Z$) and error variability (expressed in $\varepsilon$), and this interaction is captured in the optimal loss function (cf. el karoui et al., ). in other words, $\hat{z}_\varepsilon$ acts more like a double exponential when the influence of the standard normal in it is not dominant (or when $\kappa < 0.3$ or so, as we discover when we solve the equations), and in this case the optimal loss function is closer to the lad loss. in cases when $\kappa > 0.3$ or so, it acts more like gaussian noise, leading to the better performance of ols (because the optimal loss is closer to ls). moreover, for double-exponential errors, the m-estimator lad is an mle, and we are in a high-dimensional situation. it is well known that the mle does not work well in high dimensions. remedies have been found through penalized mle, where a bias is introduced to reduce variance and consequently reduce the mse. in contrast, when $\kappa > 0.3$ or so, the better estimator ols is also unbiased, but has a smaller variance nevertheless. the variance reduction is achieved through a better loss function (ls rather than lad) and because of a concentration of quadratic forms of the design matrix. this concentration does not hold for fixed orthogonal designs, however. a follow-up work (bean et al.
, ) addresses the question of obtaining the optimal loss function. the performance of estimators from penalized ols and penalized lad when the error distribution is double-exponential is the subject of current research. preliminary results indicate that similar phenomena occur in non-sparse cases. furthermore, simulations with a design matrix from an fmri experiment and double-exponential errors show the same phenomenon; that is, when $\kappa > 0.3$ or so, ols is better than lad. this provides some insurance for using the l2 loss function in the fmri project. it is worth noting that el karoui et al. ( ) contains results for more general settings. in this paper, we cover three problems facing statisticians in the 21st century: figuring out how vision works with fmri data, developing a smoothing parameter selection method for lasso, and connecting perturbation in the case of high-dimensional data with classical robust statistics through analytical work. these three problems are tied together by stability. stability is well defined if we describe the data perturbation scheme for which stability is desirable; such schemes include the bootstrap, subsampling, and cross-validation. moreover, we briefly review results in the probability literature to explain that stability drives limiting results such as the central limit theorem, which is a foundation for classical asymptotic statistics.
using these three problems as a backdrop, we make four points. firstly, statistical stability considerations can effectively aid the pursuit of interpretable and reliable scientific models, especially in high dimensions. stability in a broad sense includes replication, repeatability, and different data perturbation schemes. secondly, stability is a general principle on which to build statistical methods for different purposes. thirdly, the meaning of stability needs articulation in high dimensions because instability could be brought about by sample variability and/or heavy tails in the errors of a linear regression model. last but not least, emphasis should be placed on the stability aspects of statistical inference and conclusions, in the referee process of scientific and applied statistics papers and in our current statistics curriculum. statistical stability in the age of massive data is an important area for research and action, because high dimensions provide ample opportunities for instability to reveal itself and challenge the reproducibility of scientific findings. as we began this article with words of tukey, it seems fitting to end also with his words:
``what of the future? the future of data analysis can involve great progress, the overcoming of real difficulties, and the provision of a great service to all fields of science and technology. will it? that remains to us, to our willingness to take up the rocky road of real problems in preference to the smooth road of unreal assumptions, arbitrary criteria, and abstract results without real attachments. who is for the challenge?'' tukey (p. 64, ).

this paper is based on the 2012 tukey lecture of the bernoulli society delivered by the author at the 8th world congress of probability and statistics in istanbul on july 9, 2012.
for their scientific influence and friendship, the author is indebted to her teachers/mentors/colleagues: the late professor lucien le cam, professor terry speed, the late professor leo breiman, professor peter bickel, and professor peter bühlmann. this paper is invited for the special issue of _bernoulli_ commemorating the 300th anniversary of the publication of jakob bernoulli's ars conjectandi in 1713. the author would like to thank yuval benjamini for his help in generating the results in the figures. she would also like to thank two referees for their detailed and insightful comments, and yoav benjamini and victoria stodden for helpful discussions. partial support from nsf grants ses-0835531 (cdi) and dms-11-07000, aro grant w911nf-11-1-0114, and the nsf science and technology center on science of information through grant ccf-0939370 is gratefully acknowledged.

reproducibility is imperative for any scientific discovery. more often than not, modern scientific findings rely on statistical analysis of high-dimensional data. at a minimum, reproducibility manifests itself in the stability of statistical results relative to ``reasonable'' perturbations to the data and to the model used. jackknife, bootstrap, and cross-validation are based on perturbations to data, while robust statistics methods deal with perturbations to models. in this article, a case is made for the importance of stability in statistics. firstly, we motivate the necessity of stability for interpretable and reliable encoding models of brain fmri signals. secondly, we find strong evidence in the literature to demonstrate the central role of stability in statistical inference, such as sensitivity analysis and effect detection. thirdly, a smoothing parameter selector based on estimation stability (es), es-cv, is proposed for lasso, in order to bring stability to bear on cross-validation (cv).
es-cv is then utilized in the encoding models to reduce the number of predictors by 60% with almost no loss (1.3%) of prediction performance across over 2,000 voxels. last, a novel ``stability'' argument is seen to drive new results that shed light on the intriguing interactions between sample-to-sample variability and heavier-tailed error distributions (e.g., double-exponential) in high-dimensional regression models with $p$ predictors and $n$ independent samples. in particular, when $p/n > 0.3$ or so and the error distribution is double-exponential, ordinary least squares (ols) is a better estimator than the least absolute deviation (lad) estimator.
the construction of useful confidence sets is one of the more challenging problems in nonparametric function estimation. there are two main interrelated issues which need to be considered together: coverage probability and the expected size of the confidence set. for a fixed parameter space it is often possible to construct confidence sets which have guaranteed coverage probability over the parameter space while controlling the maximum expected size. however, such minimax statements are often thought to be too conservative, and a more natural goal is to have the expected size of the confidence set reflect, in some sense, the difficulty of estimating the particular underlying function. these issues are well illustrated by considering confidence intervals for the value of a function at a fixed point. let $y$ be an observation from the white noise model $$dy(t) = f(t)\,dt + n^{-1/2}\,dw(t),$$ where $w$ is standard brownian motion and $f$ belongs to some parameter space $\mathcal{f}$. suppose that we wish to construct a confidence interval for $f$ at some point, say $f(0)$. let $ci$ be a confidence interval for $f(0)$ based on observing the process $y$, and let $l(ci)$ denote the length of the confidence interval. the minimax point of view can then be expressed as follows: subject to the constraint on the coverage probability, minimize the maximum expected length.
as an example, it is common to consider the lipschitz classes $$\lambda(\beta, m) = \bigl\{ f : \bigl| f^{(\lfloor \beta \rfloor)}(x) - f^{(\lfloor \beta \rfloor)}(y) \bigr| \le m |x - y|^{\beta'}, 0 < \beta' \le 1 \bigr\},$$ where $\lfloor \beta \rfloor$ is the largest integer less than $\beta$ and $\beta' = \beta - \lfloor \beta \rfloor$. for these classes it easily follows from results of , and that the minimax expected length of confidence intervals which have guaranteed coverage of $1 - \alpha$ over $\lambda(\beta, m)$ is of order $n^{-\beta/(2\beta+1)}$. it should, however, be stressed that confidence intervals which achieve such an expected length rely on knowledge of the particular smoothness parameters $\beta$ and $m$, which are not known in most applications. unfortunately, and have shown that the natural goal of constructing an adaptive confidence interval which has a given coverage probability and has expected length that is simultaneously close to these minimax expected lengths for a range of smoothness parameters is not in general attainable. more specifically, suppose that a confidence interval has guaranteed coverage probability of $1 - \alpha$ over $\lambda(\beta, m)$. then for any $f$ in the interior of $\lambda(\beta, m)$, the expected length at this $f$ must also be of order $n^{-\beta/(2\beta+1)}$. in other words, the minimax rate describes the actual rate for all functions in the class other than those on the boundary of the set. for example, in the case that a confidence interval has guaranteed coverage probability of $1 - \alpha$ over the lipschitz class $\lambda(1, m)$, then even if the underlying function has two derivatives, with the first derivative smaller than $m$, the confidence interval for $f(0)$ _must_ still have expected length of order $n^{-1/3}$, even though one would hope that an adaptive confidence interval would have a much shorter length of order $n^{-2/5}$.
despite these very negative results, there are some settings where some degree of adaptation has been shown to be possible. in particular, under certain shape constraints, confidence bands have been constructed which have a guaranteed coverage probability of at least $1 - \alpha$ over the collection of all monotone densities and which have maximum expected length of the minimax order for those monotone densities lying in a smaller smoothness class for a particular choice of the smoothness parameter. this construction relies on the selection of a tuning parameter and is thus not adaptive. dümbgen ( ), however, does provide adaptive confidence bands with optimal rates for both isotonic and convex functions under supremum norm loss on arbitrary compact subintervals. these results are, however, still framed in terms of the maximum length over particular large parameter spaces, and the existence of such intervals raises the question of exactly how much adaptation is possible. it is this question that is the focus of the present paper. rather than considering the maximum expected length over large collections of functions, we study the problem of adaptation to each and every function in the parameter space. we examine this problem in detail for two commonly used collections of functions that have shape constraints, namely the collection of convex functions and the collection of monotone functions. we focus on these parameter spaces as it is for such shape-constrained problems that there is some hope for adaptation.
within this context, we consider the problem of constructing a confidence interval for the value of a function at a fixed point under both the white noise with drift model given in ([w.model]) and a nonparametric regression model. we show that within the class of convex functions and the class of monotone functions, it is indeed possible to _adapt to each individual function_, and not just to the minimax expected length over different parameter spaces in a collection. the notion of adaptivity to a single function is also discussed in lepski, mammen and spokoiny ( ) for the related point estimation problem, but in these contexts a logarithmic penalty in the noise level must be paid, and thus the notion of adaptivity is somewhat different. this result is achieved in two steps. first we study the problem of minimizing the expected length of a confidence interval, assuming that the data is generated from a particular function in the parameter space, subject to the constraint that the confidence interval has guaranteed coverage probability over the entire parameter space. the solution to this problem gives a benchmark for the expected length which depends on the function considered. it gives a bound on the expected length of any adaptive interval because if the expected length is smaller than this bound for _any_ particular function, the confidence interval cannot have the desired coverage probability. in applications it is more useful to express the benchmark in terms of a local modulus of continuity, an analytic quantity that can be easily calculated for individual functions.
in situations where adaptation is not possible ,this local modulus of continuity does not vary significantly from function to function .such is the case in the settings considered in .however , in the context of convex or monotone functions , the resulting benchmark does vary significantly , and this opens up the possibility for adaptation in those settings .our second step is to actually construct adaptive confidence intervals .this is done separately for monotone functions and convex functions , with similar results .for example , an adaptive confidence interval is constructed which is shown to have expected length uniformly within an absolute constant factor of the benchmark for every convex function , while maintaining coverage probability over the collection of all convex functions . in other words ,this confidence interval has smallest expected length , up to a universal constant factor , for each and every convex function within the class of all confidence intervals which guarantee a coverage probability over all convex functions .a similar result is established for a confidence interval designed for monotone functions .the rest of the paper is organized as follows . 
in section [lowerbound] the benchmark for the expected length at each monotone function or each convex function is established, under the constraint that the interval has a given level of coverage probability over the collection of monotone functions or the collection of convex functions. section [procedure.sec] constructs data-driven confidence intervals for both monotone functions and convex functions and shows that these confidence intervals maintain coverage probability and have expected length within an absolute constant factor of the benchmark given in section [lowerbound] for each monotone function and convex function. section [regression.sec] considers the nonparametric regression model, and section [discussion.sec] discusses connections of our results with other work in the literature. proofs are given in section [proof.sec]. as mentioned in the introduction, the focus in this paper is the construction of confidence intervals whose expected length adapts to the unknown function. the evaluation of these procedures depends on lower bounds, which are given here in terms of a local modulus of continuity first introduced by in the context of point estimation of convex functions under mean squared error loss. these lower bounds provide a natural benchmark for our problems. we focus in this paper on estimating the function at $0$, since estimation at other points away from the boundary is similar. for a given function class $\mathcal{f}$, write $\mathcal{i}_\alpha(\mathcal{f})$ for the collection of all confidence intervals which cover $f(0)$ with guaranteed coverage probability of $1 - \alpha$ for all functions in $\mathcal{f}$. for a given confidence interval $ci \in \mathcal{i}_\alpha(\mathcal{f})$, denote by $l(ci)$ the length of $ci$ and by $e_f l(ci)$ the expected length of $ci$ at a given function $f$. the minimum expected length at $f$ of all confidence intervals with guaranteed coverage probability of $1 - \alpha$ over $\mathcal{f}$ is then given by $$l^*(f, \mathcal{f}) = \inf_{ci \in \mathcal{i}_\alpha(\mathcal{f})} e_f l(ci).$$ a natural goal is to construct a confidence interval with expected length close to this minimum for every $f$ while maintaining the coverage probability over $\mathcal{f}$.
However, although is a natural benchmark for the expected length of confidence intervals, it is not easy to evaluate exactly. Instead, as a first step toward our goal, we provide a lower bound for the benchmark in terms of a local modulus of continuity introduced by . The local modulus is a quantity that is more easily computable, and techniques for its analysis are similar to those given in and , where a global modulus of continuity was introduced in the study of minimax theory for estimating linear functionals. See the examples in Section [example.sec]. For a parameter space and function , the local modulus of continuity is defined by where is the function norm. The following theorem gives a lower bound for the minimum expected length in terms of the local modulus of continuity. In this theorem and throughout the paper we write for the cumulative distribution function and for the density function of a standard normal distribution, and set . [el.bound.thm] Suppose is a nonempty convex set. Let and . Then for confidence intervals based on ([w.model]), . In particular, the lower bounds given in Theorem [el.bound.thm] can be viewed as benchmarks for the evaluation of the expected length of confidence intervals when the true function is , for confidence intervals which have guaranteed coverage probability over all of . The bound depends on the underlying true function as well as the parameter space . The bounds from Theorem [el.bound.thm] are general. In some settings they can be used to rule out the possibility of adaptation, whereas in other settings they provide bounds on how much adaptation is possible.
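The flavor of these local modulus calculations can be sketched in the Lipschitz case. The following is only a sketch under the assumption that the class is defined by a Lipschitz bound \(M\) (the class parameters are elided in the text, so the constants here are illustrative):

```latex
% Sketch: local modulus over a Lipschitz class F(M) at the point 0.
% Upper bound: if g and f are both M-Lipschitz and h = g - f, then h is
% 2M-Lipschitz, so |h(t)| >= |h(0)| - 2M|t| and, integrating over
% |t| <= |h(0)|/(2M),
%   \epsilon^2 \ge \|h\|_2^2 \ge 2\int_0^{|h(0)|/(2M)} (|h(0)| - 2Mt)^2 \, dt
%               = \frac{|h(0)|^3}{3M},
% giving |g(0) - f(0)| \le (3M)^{1/3} \epsilon^{2/3}.
% Lower bound: if f has Lipschitz constant M' < M, perturb by the tent
% h_\delta(t) = (\delta - (M - M')|t|)_+, which keeps g = f + h_\delta
% in F(M) and satisfies
%   \|h_\delta\|_2^2 = \frac{2\delta^3}{3(M - M')}.
% Both bounds give \omega(\epsilon, f) \asymp \epsilon^{2/3}, with
% constants that do not depend on f beyond its Lipschitz constant: the
% benchmark is essentially the same for every f in the interior of the
% class, which is why no adaptation is possible over a single Lipschitz
% class.
```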
In particular the result ruling out adaptation over Lipschitz classes mentioned in the easily follows from this theorem. For example, consider the Lipschitz class and suppose that is in the interior of . Straightforward calculations similar to those given in Section [example.sec] show that . Now consider two Lipschitz classes and with . A fully adaptive confidence interval in this setting would have guaranteed coverage of over and maximum expected length over of order for and . However, it follows from Theorem [el.bound.thm] and ([lip.modulus]) that for all confidence intervals with coverage probability of over , for every with , for some constant not depending on . In particular this holds for all and hence . Therefore it is not possible to have confidence intervals with adaptive expected length over two Lipschitz classes with different smoothness parameters. In the present paper Theorem [el.bound.thm] will be used to provide benchmarks in the setting of shape constraints. Denote by and , respectively, the collection of all monotonically nondecreasing functions and the collection of all convex functions on ] has guaranteed coverage probability of at least over all monotonically nondecreasing functions. Now set and define the confidence interval to be . The properties of this confidence interval can then be analyzed in the same way as before and can be shown to be similar to those for the white noise model. In particular, the following result holds. [mel_reg.thm] Let . The confidence interval defined in ([reg.mci]) has coverage probability of at least for all monotone functions and satisfies for all monotonically nondecreasing functions, where is a constant depending on only. As in the monotone case, set . For define the local average estimators . We should note that this indexing scheme is the reverse of that given for the white noise with drift process.
here estimators with small values of have smaller bias and larger variance than those with larger values of . as in the white noise model it is easy to check that has nonnegative bias .it is also important to introduce an estimate which has a similar variance but is guaranteed to have nonpositive bias .the key step is to introduce as an estimate of the bias of .the following lemma gives the required properties of and .[ reg.bias.lem ] for any convex function , from ( [ reg.bias.bound2 ] ) it is clear that the biases of the estimators are nonnegative and monotonically nondecreasing . in addition straightforward calculations using both ( [ reg.bias.bound1 ] ) and ( [ reg.bias.bound2 ] ) show that the estimators have a nonpositive and monotonically nonincreasing biases .simple calculations show that the variance of is .it then follows that ] unless is near to the boundary .more specifically for any we can consider estimators and where for monotone functions and where for convex functions .the basic theory is the same as before . for monotonically nondecreasing functions ,the confidence interval is replaced by \ ] ] and the choice of is given by where .the final confidence interval is defined by for convex functions , the confidence interval is replaced by ,\ ] ] and is chosen to be define the final confidence interval by the modulus of continuity defined in ( [ l.modulus ] ) is replaced by the earlier analysis then yields and where we now have finally we should note that at the boundary the construction of a confidence interval must be unbounded .for example any honest confidence interval for must be of the form ; otherwise it can not have guaranteed coverage probability .we prove the main results in this section .we shall omit the proofs for theorems [ mel_reg.thm ] and [ cel_reg.thm ] as they are analogous to those for the corresponding results in the white noise model .set .now note that is convex in for all .hence is also convex with . 
for set , and it follows that .equation ( [ con.bias.eq ] ) follows from the fact that for , and equation ( [ con.eq ] ) follows from the fact that . for any convex function ,let .then is convex , increasing in and .convexity of yields that for , note that and so is equivalent to which is the same as now note that for and , and consequently then ( [ aaaaa ] ) follows by taking and and then summing over .suppose that where it is known that ] and which minimizes the expected length when is given by .\ ] ] now for , let .let be this collection of .now for the process there is a sufficient statistic given by note that has a normal distribution or more specifically . for monotone functions, we have where is a standard normal random variable . because and , we have for convex functions , let .it follows from lemma [ con.lem ] that , and hence we have note that because both and are independent of for , and the event depends only on for , then by proposition [ m.cij.prop ] we have we now turn to the upper bound for the expected length .note that for , , and so we have it follows from that , and hence we have thus set for .then it is easy to see that thus the right - hand side is increasing in . through numerical calculations, we can see that , for , thus , by equation ( [ b8 ] ) , we have note that if , then and hence there is a such that we have either or . 
if , let and if , let then we have if , let then we have it then follows that and so to bound the coverage probability note that \\[-8pt ] \nonumber & & { } + \sum_{k = -2}^{2 } p\bigl(f(0 ) \notin \mathrm{ci}_{j_*^c + k}\bigr).\end{aligned}\ ] ] it then follows from equation ( [ too.early ] ) that for all .it is easy to verify directly that for all , .furthermore , it is easy to see that for , and so hence , we now turn to the upper bound for the expected length .note that hence \\[-8pt ] \nonumber & & { } \times\biggl(p\bigl ( \hat j \le j_*^c\bigr ) + \sum _ { k=1}^{\infty } 2^{k/2 } p\bigl(\hat j = j_*^c + k\bigr ) \biggr).\end{aligned}\ ] ] set for .then it is easy to see that it then follows from ( [ too.far ] ) that the right - hand side is clearly increasing in .direct numerical calculations show that for , it then follows directly from ( [ el ] ) that note that if , then , and hence there is a satisfying such that , where .let be defined by there is also a as in the proof of lemma 5 in our other paper with for which if , then let , and then we have it then follows that and so | adaptive confidence intervals for regression functions are constructed under shape constraints of monotonicity and convexity . a natural benchmark is established for the minimum expected length of confidence intervals at a given function in terms of an analytic quantity , the local modulus of continuity . this bound depends not only on the function but also the assumed function class . these benchmarks show that the constructed confidence intervals have near minimum expected length for each individual function , while maintaining a given coverage probability for functions within the class . such adaptivity is much stronger than adaptive minimaxity over a collection of large parameter spaces . , |
Recent studies have shown that the power spectrum of stock market fluctuations is inversely proportional to some power of the frequency, which points to self-similarity in time for the processes underlying the market. Our knowledge of the random and/or deterministic character of those processes is however limited. One rigorous way to sort out the noise from the deterministic components is to examine in detail correlations at scales through the so-called master equation, i.e. the Fokker-Planck equation (and the subsequent Langevin equation) for the probability distribution function (PDF) of signal increments. This theoretical approach, so-called solving of the inverse problem, based on statistical principles, is often the first step in sorting out the model(s). In this paper we derive the FPE directly from the experimental data of two financial indices and two exchange rate series, in terms of a drift and a diffusion coefficient. We would like to emphasize that the method is model independent. The technique allows examination of long and short time scales _on the same footing_. The analytical form so found for both drift and diffusion coefficients has a simple physical interpretation, reflecting the influence of the deterministic and random forces on the examined market dynamics processes. Placed into a Langevin equation, they could allow for some first-step forecasting. We consider the daily closing price of two major financial indices, Nikkei 225 for Japan and NASDAQ Composite for the US, and daily exchange rates involving currencies of Japan, the US and Europe, / and /, from January 1, 1985 to May 31, 2002.
Data series of Nikkei 225 (4282 data points) and NASDAQ Composite (4395 data points) are downloaded from the Yahoo web site ( ). The exchange rates of / and / are downloaded from and consist of 4401 data points each. Data are plotted in Fig. 1(a-d). The / case was studied in for the 1992-1993 years. See also [6], [8-10] and [11] for some related work and results on such time series signals, some on high frequency data, and for different time spans.

[Fig. 1: (a) Nikkei 225 and (b) NASDAQ Composite closing prices, (c) / and (d) / exchange rates for the period from Jan. 01, 1985 till May 31, 2002.]

To examine the fluctuations of the time series at different time delays (or time lags) we study the distribution of the increments . Therefore, we can analyze the fluctuations at long and short time scales on the same footing. Results for the probability distribution functions (PDF) are plotted in Fig. 2(a-d). Note that while the PDFs of one day time delays (circles) for all time series studied have similar shapes, the PDFs for longer time delays show fat tails, as in , of the same type for Nikkei 225, / and /, but different from the PDF for NASDAQ.

[Fig. 2: PDFs of (a) Nikkei 225, (b) NASDAQ, (c) / and (d) / from Jan. 01, 1985 till May 31, 2002 for different delay times. Each PDF is displaced vertically to enhance the tail behavior; symbols and the time lags are in the insets. The discretisation step of the histogram is (a) 200, (b) 27, (c) 0.1 and (d) 0.008, respectively.]

More information about the correlations present in the time series is given by joint PDFs that depend on variables, i.e. . We started to address this issue by determining the properties of the joint PDF for , i.e. . The symmetrically tilted character of the joint PDF contour levels (Fig. 3(a-c)) around an inertia axis with slope 1/2 points to a statistical dependence, i.e. a correlation, between the increments in all examined time series.
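The empirical increment PDFs discussed above are simple to reproduce: the PDF at lag τ is a normalized histogram of x(t + τ) − x(t). A minimal sketch on a synthetic random-walk series (the function name and the synthetic series are illustrative, standing in for the market data):

```python
import numpy as np

def increment_pdf(prices, lag, bins=50):
    """Histogram estimate of the PDF of increments x(t + lag) - x(t)."""
    inc = prices[lag:] - prices[:-lag]
    density, edges = np.histogram(inc, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, density, np.diff(edges)

# Synthetic random-walk "price" series standing in for a market index.
rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(size=4400))

for lag in (1, 8, 32):
    centers, density, widths = increment_pdf(prices, lag)
    # The density integrates to one; the spread of the PDF grows with lag.
    print(lag, round(float((density * widths).sum()), 6))
```

Plotting these densities on a common axis, each shifted vertically, reproduces the style of Fig. 2.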
[Fig. 3: Joint PDF of (a) Nikkei 225, (b) NASDAQ closing price signal and (c) / and (d) / exchange rates for and . Contour levels correspond to from center to border.]

The conditional probability function is for . For any , the Chapman-Kolmogorov equation is a necessary condition of a Markov process, one without memory but governed by probabilistic conditions. The Chapman-Kolmogorov equation, when formulated in differential form, yields a master equation, which can take the form of a Fokker-Planck equation. For ,

\[
\frac{\partial}{\partial \tau}\, p(\delta x, \tau) = \Big[ -\frac{\partial}{\partial (\delta x)}\, D^{(1)}(\delta x, \tau) + \frac{\partial^{2}}{\partial (\delta x)^{2}}\, D^{(2)}(\delta x, \tau) \Big]\, p(\delta x, \tau) \label{efp}
\]

in terms of a drift coefficient \(D^{(1)}(\delta x, \tau)\) and a diffusion coefficient \(D^{(2)}(\delta x, \tau)\). The coefficient functional dependence can be estimated directly from the moments (known as Kramers-Moyal coefficients) of the conditional probability distributions: for . The functional dependence of the drift and diffusion coefficients and for the normalized increments is well represented by a line and a parabola, respectively. The values of the polynomial coefficients are summarized in Table 1 and Fig. 4.
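The moment-based estimation of the drift and diffusion coefficients can be sketched as follows. This is a generic Kramers-Moyal recipe, tested on a synthetic Ornstein-Uhlenbeck process rather than the paper's market increments, and all names and parameters are illustrative:

```python
import numpy as np

def kramers_moyal(x, dt, bins=40, min_count=50):
    """Estimate drift D1 and diffusion D2 from a time series sampled at
    spacing dt, via the first two conditional moments of the one-step
    increments (the lowest-order Kramers-Moyal coefficients)."""
    dx = x[1:] - x[:-1]
    edges = np.linspace(x.min(), x.max(), bins + 1)
    idx = np.clip(np.digitize(x[:-1], edges) - 1, 0, bins - 1)
    centers, d1, d2 = [], [], []
    for b in range(bins):
        sel = idx == b
        if sel.sum() < min_count:              # skip sparse bins
            continue
        centers.append(0.5 * (edges[b] + edges[b + 1]))
        d1.append(dx[sel].mean() / dt)               # drift
        d2.append((dx[sel] ** 2).mean() / (2 * dt))  # diffusion
    return np.array(centers), np.array(d1), np.array(d2)

# Ornstein-Uhlenbeck test process dX = -a X dt + s dW, for which
# D1(x) = -a x (linear) and D2(x) = s**2 / 2 (constant) -- mirroring the
# linear drift and polynomial diffusion fits reported for the data.
rng = np.random.default_rng(1)
a, s, dt, n = 1.0, 0.5, 0.01, 200_000
steps = s * np.sqrt(dt) * rng.normal(size=n - 1)
x = np.empty(n)
x[0] = 0.0
for t in range(n - 1):
    x[t + 1] = x[t] * (1.0 - a * dt) + steps[t]

xc, d1, d2 = kramers_moyal(x, dt)
slope = np.polyfit(xc, d1, 1)[0]   # recovers -a approximately
```

Fitting a line to the estimated D1 and a parabola to D2, as done for the market data, then reads off the polynomial coefficients of Table 1.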
[Fig. 4: Drift and diffusion coefficients for the PDF evolution equation (3); is normalized with respect to the value of the standard deviation of the PDF increments at delay time 32 days: (a, b) Nikkei 225 and (c, d) NASDAQ closing price signal, (e, f) / and (g, h) / exchange rates.]

[Fig. 5: (a) for two values of , days, day, for NASDAQ; contour levels correspond to = -0.5, -1.0, -1.5, -2.0, -2.5 from center to border; data (solid line) and solution of the Chapman-Kolmogorov equation integration (dotted line); (b) and (c) data (circles) and solution of the Chapman-Kolmogorov equation integration (plusses) for the corresponding PDF at = -50 and +50.]

[Table 1: Values of the polynomial coefficients defining the linear and quadratic dependence of the drift and diffusion coefficients and for the FPE (3) of the normalized data series; represents the normalization constant equal to the standard deviation of the = 32 days PDF.] [tab1]

The leading coefficient ( ) of the linear dependence has approximately the same values for all studied signals, thus the same deterministic noise (drift coefficient). Note that the leading term ( ) of the functional dependence of the diffusion coefficient of the
NASDAQ closing price signal is about twice the leading, i.e. second order, coefficient of the other three series of interest. This can be interpreted as if the stochastic component (diffusion coefficient) of the dynamics of NASDAQ is twice as large as the stochastic components of Nikkei 225, / and /. A possible reason for such a behavior may be related to the transaction procedure on the NASDAQ. Our numerical result agrees with that of Ref. if a factor of ten is corrected in the latter Ref. for . The validity of the Chapman-Kolmogorov equation has also been verified. A comparison of the directly evaluated conditional PDF with the numerical integration result (2) indicates that both PDFs are statistically identical. The more pronounced peak for the NASDAQ is recovered (see Fig. ). An analytical form for the PDFs has been obtained by other authors, but with models different from more classical ones. The present study of the evolution of Japan and US stock as well as foreign currency exchange markets has allowed us to point out the existence of deterministic and stochastic influences. Our results confirm those for high frequency (1 year long) data. The Markovian nature of the process governing the PDF evolution is confirmed for such long range data, as in for high frequency data. We found that the stochastic component (expressed through the diffusion coefficient) for NASDAQ is substantially larger (twice) than for Nikkei 225, / and /. This could be attributed to the electronic nature of executing transactions on NASDAQ, therefore to different stochastic forces for the market dynamics.

Silva AC, Yakovenko VM (2002) Comparison between the probability distribution of returns in the Heston model and empirical data for stock indices, cond-mat/0211050; Dragulescu AA, Yakovenko VM (2002) Probability distribution of returns in the Heston model with stochastic volatility, cond-mat/0203046

| The evolution of the probability distributions of Japan and US major market indices, Nikkei 225 and NASDAQ Composite index, and / and / currency exchange rates is described by means of the Fokker-Planck equation (FPE). In order to distinguish and quantify the deterministic and random influences on these financial time series we perform a statistical analysis of their increment distribution functions for different time lags . From the probability distribution functions at various , the Fokker-Planck equation for is explicitly derived. It is written in terms of a drift and a diffusion coefficient. The Kramers-Moyal coefficients , are estimated and found to have a simple analytical form, thus leading to a simple physical interpretation for both drift and diffusion coefficients. The Markov nature of the indices and exchange rates is shown and an apparent difference in the NASDAQ is pointed out. *Key words.* Econophysics; probability distribution functions; Fokker-Planck equation; stock market indices; currency exchange rates
Many algorithms, such as spectral clustering, kernel principal component analysis or, more generally, kernel-based methods, are based on estimating eigenvalues and eigenvectors of integral operators defined by a kernel function, from a given random sample. To set the context from a statistical point of view, let be an unknown probability distribution on a compact space and let be a kernel on . The goal is to estimate the integral operator from an i.i.d. random sample drawn according to . A first study on the relationship between the spectral properties of a kernel matrix and the corresponding integral operator can be found in . For the case of a symmetric square integrable kernel , they prove that the ordered spectrum of the kernel matrix converges to the ordered spectrum of the kernel integral operator. Connections between this empirical matrix and its continuous counterpart have been the subject of much research, for example in the framework of kernel PCA (e.g. , ) and spectral clustering (e.g. ). In , the authors study the connection between the spectral properties of the empirical kernel matrix and those of the corresponding integral operator by introducing two extension operators on the (same) reproducing kernel Hilbert space defined by , that have the same spectrum (and related eigenfunctions) as and , respectively. In such a way they overcome the difficulty of dealing with objects ( and ) operating in different spaces.
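The convergence of the ordered kernel matrix spectrum to that of the integral operator can be checked numerically. The sketch below uses the kernel k(x, y) = min(x, y) on [0, 1] with the uniform distribution — our own choice, made because the integral operator eigenvalues are then known in closed form — and compares the ordered eigenvalues of K/n with λ_j = ((j − 1/2)π)⁻²:

```python
import numpy as np

# Ordered spectrum of the kernel matrix K/n vs. the spectrum of the
# integral operator for k(x, y) = min(x, y) on [0, 1] under the uniform
# distribution, whose eigenfunctions are sin((j - 1/2) pi x) with
# eigenvalues lambda_j = ((j - 1/2) pi)^(-2), j = 1, 2, ...
rng = np.random.default_rng(2)
n = 1000
x = rng.uniform(size=n)
K = np.minimum.outer(x, x)                      # K_ij = k(x_i, x_j)
emp = np.sort(np.linalg.eigvalsh(K))[::-1] / n  # ordered spectrum of K/n
true = ((np.arange(1, 6) - 0.5) * np.pi) ** -2.0
print(np.round(emp[:5], 3), np.round(true, 3))
```

The top empirical eigenvalues match the operator spectrum to within the expected O(n^{-1/2}) fluctuations.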
The integral operator is related to the Gram operator , where denotes the reproducing kernel Hilbert space defined by the kernel and the corresponding feature map. The main objective of this paper is to estimate Gram operators on (infinite-dimensional) Hilbert spaces. Some bounds on the deviation of the empirical Gram operator from the true Gram operator in separable Hilbert spaces can be found in in the case of Gaussian random vectors. Let us introduce some notation. We denote by a separable Hilbert space and by a (possibly unknown) probability distribution on . Remark that with the above notation our goal is to estimate the Gram operator defined as from an i.i.d. sample drawn according to . Our approach consists in first considering the finite-dimensional setting, where is a random vector in , and then in generalizing the results to the infinite-dimensional case of separable Hilbert spaces. To be able to go from finite to infinite dimension we will establish dimension-free inequalities. To be more precise, we consider the related problem of estimating the quadratic form , which rewrites explicitly as . In the finite-dimensional setting we construct an estimator of the quadratic form and we provide non-asymptotic dimension-free bounds for the approximation error that hold under weak moment assumptions. Observe that in the finite-dimensional case the quadratic form can be seen as the quadratic form associated to the Gram matrix . Observe also that the study of the Gram matrix is of interest in the case of a non-centered criterion and it coincides, in the case of centered data (i.e. = 0 ), with . Here denotes the trace of the Gram matrix, and the bound takes the form

\[
\cdots = \begin{cases} \cdots & \text{if } \zeta_*(\max\{t, \sigma\}) \le \sqrt{n}, \\ +\infty & \text{otherwise.} \end{cases}
\]

With probability at least , for any , . For the proof we refer to Section [proof_prop2]. Remark that equation of the proof provides a bound for any choice of the parameter , and that we reported here only the numerical value of the bound when , for the sake of simplicity. Assuming any reasonable bound on the sample size we can bound the logarithmic factor hidden in with a relatively small constant. In particular, if we choose , we get . Observe that the bound does not depend explicitly on the dimension of the ambient space. More specifically, the dimension has been replaced by the entropy term . Moreover, we do not need to know the exact values of and to compute the estimator and evaluate the bound; it is sufficient to know upper bounds instead. Indeed, if we use those upper bounds to define our estimator, the above result is still true with and replaced by their upper bounds. We also observe that in order to have a meaningful (finite) bound we can choose the threshold such that , so that for any , , assuming that . More precisely, using the inequality , we see that equation holds when . With this choice the threshold decays to zero at speed as the sample size grows to infinity. Remark that the estimator is not necessarily a quadratic form. We conclude this section by introducing and studying a quadratic estimator of . We observe that Proposition [prop0] provides a confidence region for . Define , where we recall that and is defined in equation .
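As a point of comparison for the estimators above, the plain empirical quadratic form is straightforward to compute, and a median-of-means variant gives a simple generic robustification. The sketch below is only an illustrative stand-in under our own toy model — it is NOT the thresholded influence-function estimator constructed in this section:

```python
import numpy as np

def quad_form_empirical(theta, X):
    """Plain empirical quadratic form (1/n) sum_i <theta, X_i>^2,
    i.e. the quadratic form of the empirical Gram matrix."""
    return float(np.mean((X @ theta) ** 2))

def quad_form_mom(theta, X, rng, k=20):
    """Median-of-means variant: a generic robust alternative that is
    less sensitive to heavy-tailed samples than the plain mean."""
    vals = rng.permutation((X @ theta) ** 2)
    return float(np.median([b.mean() for b in np.array_split(vals, k)]))

rng = np.random.default_rng(3)
d, n = 5, 20_000
G = np.diag(np.arange(1.0, d + 1))        # true Gram operator E[X X^T]
X = rng.normal(size=(n, d)) * np.sqrt(np.diag(G))
theta = np.ones(d)
true_value = float(theta @ G @ theta)     # = 1 + 2 + 3 + 4 + 5 = 15
print(true_value, quad_form_empirical(theta, X))
```

The point of the construction in the text is precisely to improve on the plain mean under weak moment assumptions, with non-asymptotic dimension-free guarantees.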
according to proposition [ prop0 ] , with probability at least , for any , from a theoretical point of view we can consider as an estimator of any quadratic form belonging to the confidence interval ] be a threshold such that .next proposition provides the analogous for the quadratic form of the dimension - free bound presented in proposition [ prop2 ] for .[ prop3 ] with the same notation as in proposition [ prop2 ] , with probability at least for any , since for any there is such that we have let us put we observe that , with probability at least where and are defined in proposition [ prop0 ] and depend on only through .indeed , in the event of probability at least described in equation , since equation also holds for , and in the same way we get which proves equation . according to corollary [ lem1a ] in section [ appx ] we conclude the proof . according to equation , for any where is such that is still some improvement to bring , since at this stage we are not sure that is non - negative .decomposing into its positive and negative parts and writing , we can consider as an estimator of the positive semi - definite symmetric matrix as shown in the following proposition .[ prop4 ] with probability at least , for any , where is defined in proposition [ prop2 ] .let us put as before for any , there is such that , so that , according to equation , then we deduce that therefore , for any , combining the above equation with proposition [ prop3 ] we conclude the proof .in this section we extend the results obtained in the previous section to the infinite - dimensional setting .+ let be a separable hilbert space and let be an unknown probability distribution on we consider the gram operator defined by and we assume where denotes a random vector of law in analogy to the previous section we denote by the quadratic form associated with the gram operator so that we consider an increasing sequence of subspaces of of finite dimension such that and we endow each space with the 
probability which arises from the disintegration of , meaning that , if is the orthogonal projector on , .we denote by the quadratic form in associated with the probability distribution and we observe that , for any , we have in the following we consider i.i.d .samples of size in drawn according to according to proposition [ prop0 ] , the event is such that . since , by the continuity of measure , this means that with probability at least , for any and any , consequently , since , for any , the following result holds .[ prop_hspace ] with probability at least , for any , where if we do not want to go to the limit , we can use the explicit bound this bound depends on .we will see in the following another bound that goes uniformly to zero for any when tends to infinity . in the same way , proceeding as already done in the previous section we state the analogous of proposition [ prop2 ] .[ prop1.40fr ] let be known constants and put where .define and consider , according to equation , the estimator for any , define by choosing any accumulation point of the sequence .define the bound where 0 , s_4 ^ 2] ] is such that and is defined by the condition using the explicit expression of the first and second derivative of , we get and + \frac{1 + 2 \sqrt{2}}{2 } . ] performing a taylor expansion at we obtain that since for .finally we observe that , for any , let us now show that for , we have already seen that the inequality is satisfied since moreover we observe that the function is such that and also performing a taylor expansion at we get since for any ] and reminding that , we conclude the proof . as a consequence ,choosing and , we get \ , \mathrm{d } \pi_{\theta}(\theta ' ) , \end{gathered}\ ] ] where + we observe that the above inequality allows to compare with the expectation with respect to the gaussian perturbation . 
in terms of the empirical criterion we have proved that \ , \mathrm{d } \pi_{\theta}(\theta ' ) .\end{gathered}\ ] ] we are now ready to use the following general purpose pac - bayesian inequality .[ pac]let be a prior probability distribution on and let ] , we get ^ 2 } { 1 - \mu - \gamma - 2 \tau(\theta ) } + \bigl [ 1 - \mu - \tau(\theta ) \bigr ] \frac{\gamma + \tau(\theta)}{1 - \mu - \gamma - 2 \tau(\theta ) } - \gamma - \tau(\theta ) = 0 , \end{gathered}\ ] ] which implies therefore , since according to equation , for any ] , and therefore , according to equation , where + since condition can also be written as , it implies that the second order polynomial has two roots and that is on the right of its largest root , which is larger than . on the other hand , we observe that , under condition , putting , we get }{1 + \mu - \gamma + \widehat{\tau}(\theta ) } + \gamma + \widehat{\tau}(\theta ) = 0.\ ] ] therefore , when condition is satisfied , which rewrites as equation .we first observe that , according to proposition [ prop1 ] , with probability at least , = \inf_{(\lambda , \beta ) \in \lambda } b_{\lambda , \beta } \left [ \|\theta\|^{-2}\widetilde n_{\lambda}(\theta ) \right]\end{aligned}\ ] ] since , by definition , are the values which minimize . ] . 
+ as equation is trivial when we may assume that , so that in particular , by considering only the second term in the definition of we obtain that which implies therefore , since ,\ ] ] there exists for which we recall that by equation and we observe that defined as are the desired values which optimize we also remark that , by equation , thus , evaluating in we obtain that \frac{\lambda _ * } { \lambda_{\widehat{\jmath } } } \\\shoveright { + \sqrt{\frac{(2+c ) \kappa^{1/2 } s_4 ^ 2}{2 n \max \{ t , \sigma \ } } } \left ( \frac{\beta_*}{\beta_{\widehat{\jmath } } } + \frac{\beta_{\widehat{\jmath}}}{\beta_*}\right ) } \\\leq \sqrt{\frac{2(\kappa-1)}{n } \left[\frac{(2 + 3c ) s_4 ^ 2}{4 ( 2 + c ) k^{1/2 } \max \ { t , \sigma \ } } + \log(k / \epsilon ) \right ] } \cosh \biggl [ \log\left ( \frac{\lambda_{\widehat{\jmath}}}{\lambda _ * } \right ) \biggr ] \\ + \sqrt{\frac{2 ( 2+c ) \kappa^{1/2 } s_4 ^ 2}{n \max \ { t , \sigma\ } } } \cosh \biggl [ \log\left ( \frac{\beta_{\widehat{\jmath}}}{\beta_*}\right ) \biggr].\end{gathered}\ ] ] by equation we get } \cosh \left ( \frac{a}{4 } \right )\\ + \sqrt{\frac{2 ( 2+c ) \kappa^{1/2 } s_4 ^ 2}{n \max \ { t , \sigma \ } } } \cosh \left ( \frac{a}{2}\right ) .\end{gathered}\ ] ] we also observe that \leq 4n^{-1/2 } \zeta(t),\ ] ] since by definition in the same way , observing that we obtain \gamma_{\widehat{\jmath } } + 4 \delta_{\widehat{\jmath } } \lambda_{\widehat{\jmath } } / \max \ { t , \sigma\ } \\ & \leq \bigl[6 + ( \kappa-1)^{-1 } \bigr]n^{-1/2 } \zeta ( \max \ { t , \sigma\})\end{aligned}\ ] ] and similarly , & \leq 6n^{-1/2 } \zeta ( \max \ { t , \sigma\ } ) ,\\ 2 \bigl [ 2\gamma_{\widehat{\jmath } } + \delta_{\widehat{\jmath } } \lambda_{\widehat{\jmath } } / \max \ { t , \sigma \ } \bigr ] & \leq 4 n^{-1/2 } \zeta ( \max \ { t , \sigma\}).\end{aligned}\ ] ] this implies that , whenever , then we have then proved that applying the above lemma to equation and observing that , for any , \leq s_4 ^ 2,\ ] ] we 
obtain that , with probability at least , for any , .\ ] ] since by the cauchy - schwarz inequality we get choosing and computing explicitly the constants we conclude the proof .we observe that it is sufficient to prove that with probability at least where and indeed , if equation holds true , according to corollary [ lem1a ] , which is the analogous , in the infinite - dimensional setting , of proposition [ prop3 ] .thus , following the proof of proposition [ prop4 ] we obtain the desired bounds .+ let us now prove equation .observe that , for any , where is the closest point in to .since , with probability at least , for any , .\ ] ] let us now remark that for any ] be such that where is defined in proposition [ prop2 ] .[ lem0a ] the function where is defined in proposition [ prop2 ] , is non - decreasing for any if , then , so that is obviously non - decreasing .otherwise , so that therefore the function is of the form where , , and the constants , and are positive .let and observe that and that .therefore for any , and showing that is non - decreasing .[ lem2a ] for any such that , for any , and any threshold such that and , we have by symmetry of and , equation is a consequence of equation . + * step 1 .* we will prove that where is defined in equation . + * case 1 . *assume that and remark that , since is non - decreasing and , according to equation , where is defined in equation . 
therefore in this case , but when , because is a non - increasing function of , thus equation implies that since , equation holds true .+ * case 2 .* assume now that we are not in * case 1 * , implying that in this case \end{gathered}\ ] ] according to equation .moreover , continuing the above chain of inequalities \\ = \phi_+ \bigl ( \max \ { b- \eta , \sigma \ } \bigr ) \bigl [ 1 - b_{\lambda , \beta}(a + \eta ) \bigr ] \\\quad \geq \max \ { b - \eta , \sigma \ } \frac{1 - b_{\lambda , \beta}(a + \eta ) } { 1 + b_{\lambda , \beta}(\max\ { b- \eta , \sigma \ } ) } \\\geq \max \ { b-\eta , \sigma \ } \frac { 1 - b_{\lambda , \beta}(a + \eta ) } { 1 + b_{\lambda , \beta}(a+\eta)}. \end{gathered}\ ] ] therefore according to lemma [ lem1.14 ] .this concludes the proof of * step 1*. + * step 2 * taking the infimum in in equation , according to equation , we obtain that we can then use the fact that is non - decreasing ( proved in lemma [ lem0a ] ) to deduce that since there is nothing to prove when already .remark that and that in the same way .this proves that by symmetry , we can then exchange and to prove the same bounds for , and therefore also for the absolute value of this quantity , which ends the proof of the lemma . as a consequence the following result holds .[ lem1a ] for any such that , for any , and any threshold such that and , we have this is a consequence of the previous lemma , of the fact that , and of the fact that .the main goal of section [ sec1 ] is to estimate the gram matrix , where is a random vector of unknown law , from an i.i.d .sample drawn according to we have constructed a robust estimator of the gram matrix and in section [ sec_emp ] we have shown empirically its performance in the case of a gaussian mixture distribution . 
in this section we show from a theoretical point of view that the classical empirical estimator behaves similarly to our robust estimator in light tail situations , while it may perform worse otherwise .+ as already done in section [ sec1 ] , we consider the quadratic form and we denote by the quadratic form associated to the empirical gram matrix .according to the notation introduced in section [ sec1 ] , let and let where and .let us put and let us introduce where is defined in equation as at the end of the section we mention some assumptions under which it is possible to give a non - random bound for .the following proposition , compared with the result obtained for the robust estimator , presented in proposition [ prop2 ] , shows that the different behavior of the two estimators and can appear only for heavy tail data distributions .[ prop2.8eg ] consider any threshold such that .with probability at least , for any , + \bigl [ 1 - b _ * \bigl ( n(\theta ) \bigr ) \bigr]_+}.\ ] ] where is defined in proposition [ prop2 ] . before proving the above propositionwe observe that , also in this case , the bound does not depend explicitly on the dimension of the ambient space and thus the result can be extended to any infinite - dimensional hilbert space .let be the finite set defined in equation .we use as a tool the family of estimators introduced in equation , where is defined in equation .let us put we divide the proof into 4 steps .+ * step 1 . 
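the contrast described here between light - tail and heavy - tail behavior can be illustrated with a toy computation . the estimator studied in this paper is built from a truncated influence function ; as a simpler stand - in with the same qualitative robustness , the sketch below uses a median - of - means aggregation of the squared projections and compares it to the plain empirical mean on a sample containing a single heavy - tail outlier . all numbers are hypothetical toy values , not from the paper .

```python
# Illustration of the section's point: the plain empirical (mean-based)
# estimate of the quadratic form N(theta) = E<theta, X>^2 is fragile under
# heavy tails, while a robust aggregation is not.  The paper's estimator
# uses a truncated influence function; here we use median-of-means over
# blocks as a simple stand-in with the same qualitative behavior.
from statistics import median

def empirical_n(samples):
    """Plain empirical estimate of E[<theta, X>^2] from squared projections."""
    return sum(samples) / len(samples)

def median_of_means(samples, n_blocks):
    """Split the sample into blocks, average within blocks, take the median."""
    size = len(samples) // n_blocks
    block_means = [
        sum(samples[b * size:(b + 1) * size]) / size for b in range(n_blocks)
    ]
    return median(block_means)

# 29 well-behaved squared projections plus one heavy-tail outlier.
projections_sq = [1.0] * 29 + [1000.0]

print(empirical_n(projections_sq))          # 34.3, pulled far from the bulk
print(median_of_means(projections_sq, 10))  # 1.0, the bulk value
```

the outlier contaminates a single block , so the median over block means ignores it , while the global mean is dragged by it .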
*the first step consists in linking the empirical estimator with + we claim that , with probability at least for any , any such that , +^{-1},\ ] ] with the convention that moreover , with probability at least for any , any , such that , we first observe that , according to the definition of , for any threshold , \leq r \bigl ( \lambda^{1/2 } \widetilde{n}_{\lambda}(\theta)^{-1/2 } \theta \bigr ) = r_{\lambda } \bigl ( \widehat{\alpha}(\theta ) \ , \theta \bigr ) \leq 0,\ ] ] where we have used the fact that the function , introduce in equation , is non - decreasing .moreover as soon as and this holds true , according to proposition [ prop0 ] , with probability at least , for any and any such that . indeed , by proposition [ prop0 ] , with probability at least , for any , any defining we get .\end{gathered}\ ] ] in the same way , with probability at least , for any , any such that , we obtain .\ ] ] we remark that the derivative of is \\ \displaystyle \frac{\frac{z^2}{2}}{1+z+\frac{z^2}{2 } } & \text{if } \ z \in [ -1,0]\\ \displaystyle\frac{\frac{z^2}{2}}{1-z+\frac{z^2}{2 } } & \text{if } \ z \in [ 0,1 ] , \end{cases}\ ] ] showing that , and therefore that is a non - decreasing function satisfying applying equation to equation we obtain where we have used the fact that since , by the cauchy - schwarz inequality , we get which proves the first inequality .similarly , since in non - decreasing , we obtain that , with probability at least , for any , any such that , where the last inequality follows from equation . 
+* this is an intermediate step .we claim that , with probability at least , for any , any , any , +^{-1 } \\ \max \bigl\ { \bar{n}(\theta ) , \sigma \bigr\ } & \leq \phi_-^{-1 } \bigl ( \max \ { n(\theta ) , \sigma \ } \bigr)\biggl [ 1 - \tau_{\lambda } \bigl ( \bar{n}(\theta ) \bigl [ 1 - \tau_{\lambda}(\sigma ) \bigr]_+ \bigr ) \biggr]_+^{-1 } \\ \bar{n}(\theta ) & \geq \biggl ( 1 - \frac{\lambda^2}{3 } \biggr)_+ \phi_+\bigl ( n(\theta ) \bigr ) \end{aligned}\ ] ] where and are defined in proposition [ prop0 ] .+ we consider the threshold where we have used the fact that , by definition , for any we assume that we are in the intersection of the two events of proposition [ prop0 ] , which holds true with probability at least so that according to * step 1 * , choosing as a threshold we get +^{-1},\end{aligned}\ ] ] ( where is still defined with respect to ) , so that , according to equation , +^{-1}.\ ] ] as a consequence , recalling the definition of we have +^{-1}.\ ] ] thus , observing that +^{-1},\ ] ] we obtain the first inequality . to prove the second inequality , we use equation once to see that +\geq \bar{n}(\theta ) \bigl [ 1 - \tau_{\lambda } ( \sigma ) \bigr]_+,\ ] ] and we use it again to get +^{-1}\\ & \leq \phi_-^{-1 } \bigl ( \max \ { n(\theta ) , \sigma \ } \bigr ) \bigl [ 1 - \tau_{\lambda } \bigl ( \bar{n}(\theta ) [ 1 - \tau_{\lambda}(\sigma ) ] _ + \bigr ) \bigr]_+^{-1}.\end{aligned}\ ] ] to complete the proof of the second inequality , it is enough to remark that + \bigr ) \bigr]_+^{-1}.\ ] ] to prove the last inequality , it is sufficient to remark that by proposition [ prop0 ] and hence , when on the other hand , when this inequality is also obviously satisfied . + * step 3 . 
* we now prove that , with probability at least , for any , any , any , + \bigl [ 1 - b_{\lambda , \beta } \bigl ( n(\theta ) \bigr ) \bigr]_+ } , \\ 1 - \frac{\max \ { \bar{n}(\theta ) , \sigma \ } } { \max \ { n(\theta ) , \sigma \ } } & \leq b_{\lambda , \beta } \bigl(n(\theta ) \bigr ) + \frac{\lambda^2}{3},\end{aligned}\ ] ] where is defined in equation and in equation .+ we observe that , according to * step 2 * , + ^{-1 } \\ & \leq \frac{\max \ { n(\theta ) , \sigma \ } } { \bigl [ 1 - \tau_{\lambda } \bigl ( n(\theta ) \bigr ) \bigr]_+ \bigl [ 1 -b_{\lambda , \beta } \bigl ( n(\theta )\bigr ) \bigr]_+ } , \end{aligned}\ ] ] which implies + } + \frac { \tau_{\lambda } \bigl ( n(\theta ) \bigr)}{\bigl [ 1 - \tau_{\lambda } \bigl ( n(\theta ) \bigr ) \bigr]_+ \bigl [ 1 -b_{\lambda , \beta } \bigl ( n(\theta )\bigr ) \bigr]_+ } .\ ] ] applying lemma [ lem1.14 ] we obtain the first inequality .+ to prove the second inequality we observe that , using again * step 2 * , ^{-1},\end{aligned}\ ] ] where we have used the fact that as shown in equation .thus we conclude that + * step 4 . * from * step 3 * we deduce that + \bigl [ 1 -b_{\lambda , \beta } \bigl ( n(\theta )\bigr ) \bigr]_+ } .\ ] ] to conclude the proof it is sufficient to apply * step 3 * to defined in equation . indeed , by equation , for any and , by equation, we have we conclude this section by mentioning assumptions under which it is possible to give a non - random bound for defined in equation .let us assume that , for some exponent and some positive constants and , \leq 1.\ ] ] in this case , with probability at least , where we recall that . ] .define the estimator as [ props1 ] let ,s_4 ^ 2] ] with ) ] by an upper bound . 
+ we observe in particular that ^{1/2}}{\kappa^{1/2 } } \leq \frac { \mathbb e \bigl [ \mathbf{tr}(a^2 ) \bigr]}{\kappa^{1/2 } \mathbb e \bigl [ \lvert a \rvert_{\infty}^2 \bigr]^{1/2 } } \leq \mathbb e \bigl [ \mathbf{tr}(a ) \bigr ] = \mathbf{tr } \bigl [ \mathbb{e } ( a ) \bigr].\ ] ] indeed , to see the first inequality it is sufficient to observe that . moreover we have that \leq \mathbb e \bigl [ \lvert a \rvert_{\infty } \mathbf{tr}(a ) \bigr ] \leq \mathbb e \bigl [ \lvert a \rvert_{\infty}^2 \bigr]^{1/2 } \mathbb e \bigl [ \mathbf{tr}(a)^2 \bigr]^{1/2},\ ] ] and , denoting by an orthonormal basis of = \sum _ { \substack { 1 \leq i \leq d , \\ 1 \leq j \leq d } } \mathbb e \bigl [ \lvert a^{1/2 } e_i \rvert^2 \lvert a^{1/2 } e_j \rvert^2 \bigr ] \\ \leq \sum _ { \substack { 1 \leq i \leq d , \\ 1 \leq j \leq d } } \mathbb e \bigl [ \lvert a^{1/2 } e_i \rvert^4 \bigr]^{1/2 } \mathbb e \bigl [ \lvert a^{1/2 } e_j \rvert^4 \bigr]^{1/2 } \\\leq \kappa \sum _ { \substack { 1 \leq i \leq d , \\ 1 \leq j \leq d } } \mathbb e \bigl [ \lvert a^{1/2 } e_i \rvert^2 \bigr ] \mathbb e \bigl [ \lvert a^{1/2 } e_j \rvert^2 \bigr ] = \kappa \mathbb e \bigl [ \mathbf{tr}(a ) \bigr]^2 . 
\end{gathered}\ ] ] this implies that we can bound in proposition [ props1 ] by } { t } + \log(k ) + \log(\epsilon^{-1 } ) \biggr ) } + \sqrt { \frac{98.5\ \kappa \ \mathbb e [ \mathbf{tr}(a ) ] } { t}}.\ ] ] we conclude this section observing that , since the entropy terms are dominated by ] let be a random vector distributed according to the unknown probability measure the covariance matrix of is defined by ) ( x- \mathbb e[x])^{\top } \right]\ ] ] and our goal is to estimate , uniformly in the quadratic form \rangle^2 \right ] , \quad \theta \in \mathbb r^d,\ ] ] from an i.i.d .sample drawn according to we can not use the results we have proved for the gram matrix , since the quadratic form depends on the unknown quantity ] in order to estimate but it is sufficient to observe that the quadratic form can be written as \ ] ] where is an independent copy of more generally , given we may consider independent copies of and the random matrix so that = \mathbb e \bigl [ \ , \theta^{\top } \!\ ! a \, \theta \ , \bigr].\ ] ] we will discuss later how to choose .in the following we use a robust block estimate which consists in dividing the sample in blocks of size and then in considering the original sample as a `` new '' sample of symmetric matrices ( of independent copies of ) defined as that thus correspond to the empirical covariance estimates on each block .we can use the results of the previous section to define a robust estimator of .+ let us introduce } { \mathbb e \bigl [ \lvert a^{1/2 } \theta \rvert^2 \bigr]^2 } , \\ \text{and } \kappa & = \sup_{\substack { \theta \in \mathbb{r}^d\\\mathbb e \bigl [ \langle \theta , x - \mathbb e ( x ) \rangle^2 \bigr]>0 } } \frac{\mathbb e \bigl [ \langle \theta , x - \mathbb e ( x ) \rangle^4 \bigr]}{\mathbb e \bigl [ \langle \theta , x - \mathbb e ( x ) \rangle^2 \bigr]^2}. 
\end{aligned}\ ] ] replacing with ] .it holds that = \mathbb e \biggl [ \biggl ( \frac{1}{q(q-1 ) } \sum_{1 \leq j< k \leq q } \langle \theta , x^{(j ) } - x^{(k ) } \rangle^2 \biggr)^2 \biggr ] \\ = \frac{1}{q^2(q-1)^2 } \sum_{\substack { 1 \leq j < k \leq q\\ 1 \leq s < t \leq q } } \mathbb e \bigl [ \langle \theta , x^{(j ) } - x^{(k ) } \rangle^2 \langle \theta , x^{(s ) } - x^{(t ) } \rangle^2 \bigr]\end{gathered}\ ] ] recalling the definition of covariance , we have = \frac{1}{q^2(q-1)^2 } \biggl\ { \sum_{\substack { 1 \leq j < k \leq q\\ 1 \leq s < t \leq q } } \mathbb e \bigl [ \langle \theta , x^{(j ) } - x^{(k ) } \rangle^2 \bigr ] \mathbb e \bigl [ \langle \theta , x^{(s ) } - x^{(t ) } \rangle^2 \bigr ] \\ + \sum_{1 \leq j< k \leq q } \mathbb e \bigl [ \langle \theta , x^{(j ) } - x^{(k ) } \rangle^4 \bigr ] - \mathbb e \bigl [ \langle \theta , x^{(j ) } - x^{(k ) } \rangle^2 \bigr]^2 \\ + \sum_{\substack { 1 \leq j < k \leq q\\ 1 \leq s < t\leq q\\ \lvert \ { j , k \ }\cap \{s , t \ } \rvert = 1 } } \biggl ( \mathbb e \bigl [ \langle \theta , x^{(j ) } - x^{(k ) } \rangle^2 \langle \theta , x^{(s ) } - x^{(t ) } \rangle^2 \bigr ] \\ - \mathbb e \bigl [ \langle \theta , x^{(j ) } - x^{(k ) } \rangle^2 \bigr ] \mathbb e \bigl [ \langle \theta , x^{(s ) } - x^{(t ) } \rangle^2 \bigr ] \biggr ) \biggr\ } \\= \frac{1}{4 } \mathbb e \bigl [ \langle \theta , x^{(2 ) } - x^{(1 ) } \rangle^2 \bigr]^2 + \frac{1}{2q(q-1 ) } \mathbb e \bigl [ \langle \theta , x^{(2 ) } - x^{(1 ) } \rangle^4 \bigr ] - \mathbb e \bigl [ \langle \theta , x^{(2 ) } - x^{(1 ) } \rangle^2 \bigr]^2 \\+ \frac{q-2}{q(q-1 ) } \biggl ( \mathbb e \bigl [ \langle \theta , x^{(1 ) } - x^{(2 ) } \rangle^2 \langle \theta , x^{(1 ) } - x^{(3 ) } \rangle^2 \bigr ] - \mathbb e \bigl [ \langle \theta , x^{(1 ) } - x^{(2 ) } \rangle^2 \bigr]^2 \biggr).\end{gathered}\ ] ] define and observe that ^ 2 & = 4 n(\theta)^2 , \\ \mathbb e \bigl [ ( w_1 - w_2)^4 \bigr ] & = \mathbb e \bigl [ w_1 ^ 4 \bigr ] + 
6 \mathbb e \bigl [ w_1 ^ 2 \bigr ] \mathbb e \bigl [ w_2 ^ 2 \bigr ] + \mathbb e \bigl [ w_2 ^ 4 \bigr ] \\ & = 2 \mathbb e \bigl [ w_1 ^ 4 \bigr ] + 6 \mathbb e \bigl [ w_1 ^ 2 \bigr]^2 \leq ( 2 \kappa + 6 )n(\theta)^2 , \\\mathbb e \bigl[(w_1 - w_2)^2 ( w_1 - w_3)^2 \bigr ] & = \mathbb e \bigl [ w_1 ^ 4 \bigr ] + 3 \mathbb e \bigl [ w_2 ^ 2]^2 \leq ( \kappa + 3 ) n(\theta)^2.\end{aligned}\ ] ] therefore \leq \biggl ( 1 + \frac{(q-2)(\kappa-1)}{q(q-1 ) } + \frac{(\kappa + 1)}{q(q-1 ) } \biggr ) n(\theta)^2 = \biggl ( 1 + \frac{\tau_q(\kappa)}{q } \biggr ) n(\theta)^2,\ ] ] and hence , since = n(\theta) ] , as mentioned in the remarks following proposition [ props1 ] .we can improve somehow the constants by evaluating more carefully ] .[ lem2 ] it holds that & \leq \biggl ( 1 - \frac{q-2}{q(q-1 ) } \biggr ) \lvert \sigma \rvert_{\infty } n(\theta ) + \frac{1}{q } \biggl ( \kappa + \frac{1}{q-1 } \biggr ) \mathbf{tr}(\sigma ) n(\theta ) \\\mathbb e \bigl [ \mathbf{tr } \bigl ( a^2 \bigr ) \bigr ] & \leq \biggl ( 1 - \frac{q-2}{q(q-1 ) } \biggr ) \mathbf{tr } \bigl ( \sigma^2 \bigr ) + \frac{1}{q } \biggl ( \kappa + \frac{1}{q-1 } \biggr ) \mathbf{tr}(\sigma)^2.\end{aligned}\ ] ] replacing with ] . 
recall that \leq \kappa \mathbb e [ \| x \|^2 ] ^2 = \kappa \mathbf{tr}(\sigma)^2\ ] ] and = \mathbf{tr}(\sigma^2).$ ] we observe that = \mathbb e \biggl [ \frac{1}{q^2(q-1)^2 } \sum_{\substack{1 \leq j < k \leq q \\ 1 \leq s < t \leq q } } \langle \theta , x^{(j ) } - x^{(k ) } \rangle \langle x^{(j ) } - x^{(k ) } , x^{(s ) } - x^{(t ) } \rangle \langle x^{(s ) } - x^{(t ) } , \theta \rangle \biggr]\ ] ] and \\ & = 4 \mathbb e \bigl [ \langle \theta , x^{(1 ) } \rangle \langle x^{(1 ) } , x^{(2 ) } \rangle \langle x^{(2 ) } , \theta \rangle \bigr ] = 4 \lvert \sigma \theta \rvert^2 \leq 4 \lvert \sigma \rvert_{\infty } n(\theta ) , \\\mathbb e \bigl [ \langle \theta , x^{(1 ) } - x^{(2 ) } \rangle & \langle x^{(1 ) } - x^{(2 ) } , x^{(1 ) } - x^{(3 ) } \rangle \langle x^{(1 ) } - x^{(3 ) } , \theta \rangle \bigr ] \\ & = \mathbb e \bigl [ \langle \theta , x^{(1 ) } \rangle^2 \lvert x^{(1 ) } \rvert^2 \bigr ] + 3 \mathbb e \bigl [ \langle \theta , x^{(1 ) } \rangle \langle x^{(1 ) } , x^{(2 ) } \rangle \langle x^{(2 ) } , \theta \rangle \bigr ] \\ & \leq \kappa \mathbf{tr}(\sigma ) n(\theta ) + 3 \lvert \sigma \rvert_{\infty } n(\theta ) ,\\ \mathbb e \bigl [ \langle \theta , x^{(1 ) } - x^{(2 ) } \rangle & \langle x^{(1 ) } - x^{(2 ) } , x^{(1 ) } - x^{(2 ) } \rangle \langle x^{(1 ) } - x^{(2 ) } , \theta \rangle \bigr ] \\ & = 2 \mathbb e \bigl [ \langle \theta , x^{(1 ) } \rangle^2 \lvert x^{(1 ) } \rvert^2 \bigr ] + 2 \mathbb e \bigl [ \langle \theta , x^{(1 ) } \rangle^2 \bigr ] \mathbb e \bigl [ \lvert x^{(1 ) } \rvert^2 \bigr ] \\ & \quad + 4 \mathbb e \bigl [ \langle \theta , x^{(1 ) } \rangle \langle x^{(1 ) } , x^{(2 ) } \rangle \langle x^{(2 ) } , \theta \rangle \bigr ] \\ & \leq 2(\kappa+1 ) \mathbf{tr}(\sigma ) n(\theta ) + 4 \lvert \sigma \rvert_{\infty } n(\theta),\end{aligned}\ ] ] which proves the first inequality . 
in the same way , = \mathbb e \biggl [ \frac{1}{q^2(q-1)^2 } \sum_{\substack{1 \leq j < k \leq q \\ 1 \leq s < t\leq q } } \langle x^{(j ) } - x^{(k ) } , x^{(s ) } - x^{(t ) } \rangle^2 \biggr],\ ] ] and & = 4 \mathbb e \bigl [ \langle x^{(1 ) } , x^{(2 ) } \rangle^2 \bigr ] \\ & = 4 \mathbf{tr } \bigl ( \sigma^2 \bigr ) , \\ \mathbb e \bigl [ \langle x^{(1 ) } - x^{(2 ) } , x^{(1 ) } - x^{(3 ) } \rangle^2 \bigr ] & = \mathbb e \bigl [ \lvert x^{(1 ) } \rvert^4 \bigr ] + 3 \mathbb e \bigl [ \langle x^{(1 ) } , x^{(2 ) } \rangle^2 \bigr ] \\ & \leq \kappa \mathbf{tr}(\sigma)^2 + 3 \mathbf{tr } \bigl ( \sigma^2 \bigr ) , \\ \mathbb e \bigl [ \langle x^{(1 ) } - x^{(2 ) } , x^{(1 ) } - x^{(2 ) } \rangle^2 \bigr ] & = 2 \mathbb e \bigl [ \lvert x^{(1 ) } \rvert^4 \bigr ] + 2 \mathbb e \bigl [ \lvert x^{(1 ) } \rvert^2 \bigr]^2 + 4 \mathbb e \bigl [ \langle x^{(1 ) } , x^{(2 ) } \rangle^2 \bigr ] \\ & \leq 2 ( \kappa + 1 ) \mathbf{tr}(\sigma)^2 + 4 \mathbf{tr } \bigl ( \sigma^2 \bigr).,\end{aligned}\ ] ] which concludes the proof . 
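the block construction used above for the covariance estimator — dividing the sample into blocks of size q and forming, on each block, the matrix built from pairwise differences with normalization 1/(q(q-1)) — can be checked numerically : as stated in the text , each such matrix coincides with the usual unbiased empirical covariance of the block . the sketch below verifies this identity on toy data ( pure python , d = 2 , q = 4 ) .

```python
# Sketch of the block construction for robust covariance estimation:
# on a block (x_1, ..., x_q) form
#   A = 1/(q(q-1)) * sum_{j<k} (x_j - x_k)(x_j - x_k)^T
# and check that A equals the classical unbiased empirical covariance
# with the 1/(q-1) normalization.  (Toy numbers; d = 2, q = 4.)

def outer(u, v):
    return [[ui * vj for vj in v] for ui in u]

def mat_add(a, b):
    return [[a[i][j] + b[i][j] for j in range(len(a[0]))] for i in range(len(a))]

def mat_scale(a, s):
    return [[s * e for e in row] for row in a]

def block_pairwise_cov(block):
    """A = 1/(q(q-1)) * sum over pairs j<k of (x_j - x_k)(x_j - x_k)^T."""
    q, d = len(block), len(block[0])
    acc = [[0.0] * d for _ in range(d)]
    for j in range(q):
        for k in range(j + 1, q):
            diff = [block[j][i] - block[k][i] for i in range(d)]
            acc = mat_add(acc, outer(diff, diff))
    return mat_scale(acc, 1.0 / (q * (q - 1)))

def unbiased_cov(block):
    """Classical empirical covariance with the 1/(q-1) normalization."""
    q, d = len(block), len(block[0])
    mean = [sum(x[i] for x in block) / q for i in range(d)]
    acc = [[0.0] * d for _ in range(d)]
    for x in block:
        c = [x[i] - mean[i] for i in range(d)]
        acc = mat_add(acc, outer(c, c))
    return mat_scale(acc, 1.0 / (q - 1))

block = [[1.0, 2.0], [3.0, -1.0], [0.5, 0.0], [-2.0, 4.0]]
a = block_pairwise_cov(block)
c = unbiased_cov(block)
print(a[0][0], c[0][0])  # identical entries
```

the identity follows from sum_{j<k} |x_j - x_k|^2 = q * sum_j |x_j - mean|^2 , applied coordinate - wise .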
using these tighter bounds , we can improve to }{\bigl [ \bigl ( 1 - \frac{q-2}{q(q-1 ) } \bigr ) \lvert \sigma \rvert_{\infty } + \frac{1}{q } \bigl ( \kappa + \frac{1}{q-1 } \bigr ) \mathbf{tr}(\sigma ) \bigr ] t } \\ \shoveright { + \log(k ) + \log(\epsilon^{-1 } ) \biggr ) \biggr]^{1/2 } } \\ + \sqrt { \frac{98.5 \bigl [ q \bigl ( 1 - \frac{q-2}{q(q-1 ) } \bigr ) \lvert \sigma \rvert_{\infty } + \bigl ( \kappa + \frac{1}{q-1 } \bigr ) \mathbf{tr}(\sigma ) \bigr ] } { t}}.\end{gathered}\ ] ] therefore , in the case when we have \leq \frac{1}{q}\biggl ( \kappa+1 + \frac{2}{q(q-1 ) }\biggr ) \mathbf{tr}(\sigma ) n(\theta)\ ] ] and hence , recalling that , we can take if we compare the above result with the bound obtained in proposition [ prop2 ] for the gram matrix estimator , we see that the first appearance of in the definition of has been replaced with and that the second appearance of has been replaced with thus , when , that is not a very strong hypothesis , we can take at least , and obtain an improved bound for the estimation of that is not much larger than the bound for the estimation of the centered gram matrix , that requires the knowledge of , since the difference between the two bounds is just a matter of replacing with . shawe - taylor , j. , williams , c. and cristianini , c. and kandola , j. ( 2005 ) _ on the eigenspectrum of the gram matrix and its relationship to the operator eigenspectrum_. eds . ) : alt 2002 , lnai 2533 , pages 2340 , springer - verlag zwald , l. , bousquet , o. and blanchard , g. ( 2004 ) _ statistical properties of kernel principal component analysis_. in learning theory , volume 3120 of lecture notes in comput ., pages 594608 .springer , berlin | in this paper we investigate the question of estimating the gram operator by a robust estimator from an i.i.d . sample in a separable hilbert space and we present uniform bounds that hold under weak moment assumptions . 
the approach consists in first obtaining non - asymptotic dimension - free bounds in finite - dimensional spaces , using some pac - bayesian inequalities related to gaussian perturbations of the parameter , and then in generalizing the results to a separable hilbert space . we show , both from a theoretical point of view and with the help of some simulations , that such a robust estimator improves on the behavior of the classical empirical one in the case of heavy - tailed data distributions . |
the use of phase fresnel lenses ( pfl s ) offers a mechanism to achieve superb imaging of astrophysical objects in the hard x - ray and gamma - ray energy regimes . in principle , pfl s can concentrate nearly the entire incident photon flux , modulo absorption effects , with diffraction - limited imaging performance . the impact of absorption is energy and material dependent , but can be almost negligible at higher photon energies . the performance of these diffraction optics is obtained via tailoring the fresnel profile of each zone to yield the appropriate phase at the primary focal point . however , pfl s have long focal lengths and are chromatic ; the excellent imaging is available over a narrow energy range . in order to demonstrate the imaging capabilities of these optics , we have fabricated ground - testable pfl s in silicon . pfl s are a natural extension of fresnel - diffractive optics . as opposed to fresnel zone plates ( fzp ) , where alternating half - period or fresnel zones are completely blocked , and phase - reversal zone plates , where the blocked zone in a fzp is replaced by an amount of material to retard the phase by $\pi$ , the entire profile of each fresnel zone in a pfl has the appropriate amount of material to achieve the proper phase shift at the primary focus . in practice , the exact profile of a pfl can be approximated by a multiple - step structure as shown in figure 1 , which illustrates the first four fresnel zones of a pfl with 4 steps approximating the ideal pfl profile . the location of the radial step transitions is set by $r_1$ , the location of the first fresnel ridge ( where $f$ is the focal length and $\lambda$ the photon wavelength ) , and by $p$ , the number of steps / fresnel ridge with a step index of $i$ . this choice leads to annuli , as defined by each step , of constant area on the pfl . each annulus contributes equally , ignoring absorption , to the intensity at a given focal point . this configuration of the stepped - phase profile is denoted as a regular - stepped pfl .
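the exact expression for the step transitions is not reproduced above ; a standard choice consistent with the stated equal - area property — assuming the squared radius grows linearly with the accumulated design phase — is $r(n , i) = r_1 \sqrt{n - 1 + i/p}$ with $r_1 = \sqrt{2 \lambda f}$ . the sketch below ( with illustrative values of $\lambda$ and $f$ , not taken from the text ) checks that every annulus between successive transitions then has the same area :

```python
import math

# Radial step transitions for a regular-stepped PFL.  The form below is an
# assumption consistent with the equal-area property stated in the text:
#   r(n, i) = r1 * sqrt(n - 1 + i/p),   r1 = sqrt(2 * lam * f)
# where n is the Fresnel-ridge number, i the step index, p steps per ridge.
# lam and f are illustrative values, not taken from the paper.

def step_radius(n, i, p, lam, f):
    r1 = math.sqrt(2.0 * lam * f)          # radius of the first full ridge
    return r1 * math.sqrt((n - 1) + i / p)

lam = 1.55e-10   # ~8 keV photon wavelength in meters (illustrative)
f = 112.5        # focal length in meters (illustrative)
p = 4            # steps per ridge, as in figure 1

# Annulus areas between successive step transitions across two ridges.
radii = [step_radius(n, i, p, lam, f) for n in (1, 2) for i in range(p)]
radii.append(step_radius(3, 0, p, lam, f))
areas = [math.pi * (radii[k + 1] ** 2 - radii[k] ** 2)
         for k in range(len(radii) - 1)]
print(areas)  # all equal: each step annulus has area pi * 2 * lam * f / p
```

equal areas mean each step contributes the same amplitude at the focus , as noted in the text .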
for an exact pfl profile , all the irradiance appears in the first order ( $n = 1$ ) focus . in a stepped - profile approximation , some energy appears in negative and higher order foci . ignoring absorptive effects , the impact on the lens efficiency of approximating the exact pfl profile by a step - wise function is given by \[ i_n = \biggl [ \frac{\sin ( \pi n / p )}{\pi n / p } \biggr ]^2 \quad \text{for } ( n - 1)/p = m \in \text{integer } , \qquad i_n = 0 \quad \text{otherwise } , \] where $i_n$ is the relative intensity into the $n^{\rm th}$ order focus for $p$ steps in each fresnel zone . as $p$ increases , the indices with non - zero intensities of both the real and virtual higher order foci are pushed to higher $|n|$ with the relative intensities into these higher orders decreasing . in the limiting case where $p \rightarrow \infty$ , the profile is exact for the pfl with the relative intensity in the 1st order ( $n = 1$ ) going to 100% ( and the indices of the higher order foci being sent to $\pm \infty$ ) . more practically , a stepped - pfl with $p = 8$ per fresnel zone has 95% efficiency focussing into the 1st order focal point , _ sans _ absorption .
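equation 1 can be checked numerically ; the sketch below evaluates the relative intensity into each order and reproduces the 95% first - order efficiency quoted for $p = 8$ :

```python
import math

# Numerical check of equation 1: for a p-step approximation to the ideal
# PFL profile (absorption ignored), the relative intensity diffracted into
# the n-th order focus is
#   I_n = [sin(pi*n/p) / (pi*n/p)]^2   when (n-1)/p is an integer,
#   I_n = 0                            otherwise.

def order_intensity(n, p):
    if (n - 1) % p != 0:
        return 0.0
    x = math.pi * n / p
    return (math.sin(x) / x) ** 2

# p = 8 steps per ridge: ~95% of the energy goes into the n = 1 focus,
# and the surviving orders are n = 1 + m*p (e.g. 9, -7, 17, ...).
print(round(order_intensity(1, 8), 3))   # 0.95
print(order_intensity(2, 8))             # 0.0 (forbidden order)
print(order_intensity(9, 8))             # weak 9th-order focus
```

summing over the surviving orders stays below unity for any truncation , consistent with energy conservation for a pure phase profile .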
the material needed in a pfl to retard the phase also will attenuate the flux of incident photons . the index of refraction of matter can be expressed as $n = 1 - \delta - i \beta$ and is related to atomic scattering factors . thus for a photon of wavelength $\lambda$ , a material of thickness $t$ will retard the phase by $2 \pi \delta t / \lambda$ while attenuating the intensity by $e^{- \mu t}$ , where $\mu = 4 \pi \beta / \lambda$ . the attenuating effects of the material in a fresnel zone of a stepped - pfl can be calculated by determining the amplitude of the waveform traversing each step of the pfl profile . if $t_i$ is the material thickness of the $i^{\rm th}$ step in a particular fresnel zone , the phase will be retarded by $\phi_i = 2 \pi \delta t_i / \lambda$ . as shown in figure 1 , the $i^{\rm th}$ step retards the phase between $2 \pi ( i - 1 ) / p$ and $2 \pi i / p$ , and the amplitude can be expressed as \[ \alpha_i = c \ , e^{- \mu t_i / 2 } \ , e^{i \phi_i } , \] where $c$ is a normalization constant , and summing over all steps leads to determining the intensity at the primary focus \[ i_1 = \biggl ( \biggl [ \sum_{i=1}^p \alpha_i^r \biggr ]^2 + \biggl [ \sum_{i=1}^p \alpha_i^i \biggr ]^2 \biggr ) , \] where $\alpha_i^r$ and $\alpha_i^i$ are the real and imaginary parts of $\alpha_i$ . note that circular symmetry is assumed for the pfl , and this calculation is for a single fresnel zone . if a pfl contains a number of individual fresnel zones with identical , in phase , profiles , the irradiance at the focus would be increased by the square of that number , as each fresnel zone has the same area on the pfl . this formulation holds for any step spacing , regular or irregular , as long as a sufficiently small scale exists where the phase thickness is effectively constant . choosing an energy of 8 kev , the efficiency of a regular - stepped pfl , including absorption , is 82.3% in silicon . if absorption is ignored , the efficiency is 95% , which is exactly that as determined from equation 1 for $p = 8$ . for a pfl with diameter $d$ , minimum fresnel ridge spacing $p_{\rm min}$ , focusing at a photon wavelength $\lambda$ , the focal length is given by \[ f = \frac{d \ , p_{\rm min}}{2 \lambda } \approx 4.0 \ , p_{\rm min } \ , d \ , e , \] where $f$ is in meters for $p_{\rm min}$ in $\mu$m , $d$ in cm , and $e$ in kev in equation 6 . using representative values of $p_{\rm min}$ and $d$ at 8 kev ( cu k-$\alpha$ ) , the focal length would be 800 meters , which is rather long for a ground - test .
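the phasor sum of equations 4 - 5 can be sketched numerically for one fresnel zone of a regular - stepped pfl . the optical constants $\delta$ and $\beta$ used below are assumed tabulated values for silicon near 8 kev , not taken from the text ; with them , the computed efficiency lands in the same range as the 82.3% quoted above , and reduces to the equation - 1 value when absorption is switched off :

```python
import cmath
import math

# Sketch of the amplitude sum for one Fresnel zone of a regular-stepped
# PFL, including absorption.  Step i (i = 0..p-1) has thickness
# t_i = (i/p) * t_2pi with t_2pi = lam/delta, so its material phase is
# 2*pi*i/p and its amplitude attenuation is exp(-(beta/delta) * phase).
# DELTA and BETA are *assumed* values for silicon at 8 keV.

DELTA = 7.58e-6   # assumed real-part decrement of n for Si at 8 keV
BETA = 1.73e-7    # assumed imaginary part of n for Si at 8 keV

def zone_efficiency(p, beta_over_delta):
    """First-order efficiency of one zone with p steps (phasor average)."""
    total = 0.0 + 0.0j
    samples = 200  # sub-samples per step to capture the residual phase error
    for i in range(p):
        phase_i = 2.0 * math.pi * i / p                 # material phase of step i
        amp_i = math.exp(-phase_i * beta_over_delta)    # absorption in step i
        for s in range(samples):
            x = (i + (s + 0.5) / samples) / p           # position across the zone
            design = 2.0 * math.pi * x                  # ideal (continuous) phase
            total += amp_i * cmath.exp(1j * (phase_i - design))
    total /= p * samples
    return abs(total) ** 2

print(zone_efficiency(8, 0.0))            # ~0.95, matching equation 1
print(zone_efficiency(8, BETA / DELTA))   # lower, from absorption in the ridges
```

the small residual difference from the quoted 82.3% reflects the assumed optical constants and the simplified step - thickness convention .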
at nasa goddard space flight center , a 600 - meter interferometry testbed is available for testing of pfl optics . the nominal configuration has an optics station 150 meters from an x - ray source and a detector station 450 meters from the optics station . assuming the x - ray emission is isotropic within the field - of - view of the optics , the effective focal length of an optic focussing at the detector station follows from these source and detector distances . this sets the value of the focal length for a pfl for incorporation into this test beam configuration . using this focal length , the lens diameter , and an energy of 8 kev , this leads to a minimum fresnel ridge spacing of order ten $\mu$m , which is the natural scale size for micro - fabrication in silicon . the fresnel ridge height needed to retard the phase by $2 \pi$ is given by \[ t_{2 \pi } = \frac{\lambda}{\delta } , \] where $\lambda$ is the photon wavelength and $1 - \delta$ is the real part of the index of refraction . for silicon , this is 20.5 $\mu$m at 8 kev . the gray - scale lithographic fabrication process has been employed at the university of maryland to create pfl structures in silicon wafers . this implementation of the gray - scale process employs a lithographic mask that uses small , variable - transmission pixels ( gray levels ) that create , via projection photolithography , a designed , 3 - dimensional structure in a photoresist spun on a silicon wafer . this pattern is then transferred into the silicon via deep - reactive ion etching ( drie ) . the developed ground - test pfl s have been fabricated using silicon - on - insulator ( soi ) wafers in order to minimize the thickness of the required mechanical substrate under the pfl s and thus maximize the x - ray photon transmission . the sandwiched oxide layer forms a natural etch stop to remove the silicon substrate directly under each pfl while leaving the surrounding material for mechanical stability . the unprocessed soi wafer was 100 mm in diameter with 70 $\mu$m of silicon , 2 $\mu$m oxide , and 500 $\mu$m silicon forming the soi wafer structure . a prototype silicon pfl fabricated using the gray - scale lithographic
process is shown in figure 2 ( copyright 2004 ieee ) .

  pfl designation   diameter ( mm )   min . ridge spacing ( $\mu$m )   # of ridges   # steps / ridge
  x3                2.99              24                               32            16
  x4                2.99              24                               32            16
  x5                2.99              24                               32            8
  x6                4.72              15                               80            8

table 1 lists the four pfl designs that have been included in this ground - test fabrication . note that this pfl fabrication incorporated a design to produce $\pi$ - thick , as opposed to $2 \pi$ - thick , fresnel optics . this was chosen for this initial fabrication to effectively double the minimum ridge spacing , p , for a set focal length , pfl diameter , and design energy . although this will increase absorption losses , the relaxation of the ridge - height requirement eased the constraints on the device fabrication . the four pfl s , along with several test structures , were grouped to form a _ die _ which is compact in spatial extent . twelve of these _ dice _ , arranged in an array , were fabricated on the 100 mm soi wafer via a step - and - repeat process . the goal of this fabrication was to produce a sample of pfl s for testing in the 600 m beam line , and the process was not optimized for yield . in order to identify the optimal pfl s for testing , an optical inspection rejected those with obvious defects . this rejected 15 out of the possible 48 pfl s . the remaining pfl s were scanned via an optical profilometer ( veeco , wyko nt1100 ) to determine the accuracy of the fabricated profiles . for the 3 mm diameter pfl s , the first and last 5 fresnel ridges were scanned and compared to the design profile . for the 5 mm pfl , the 5 ridges near the half radius were also scanned and compared .
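the testbed geometry and the table 1 design values can be cross - checked under two hedged assumptions : thin - lens imaging with the 150 m source and 450 m detector distances , and ridge boundaries at $r_n = \sqrt{4 n \lambda f}$ for these $\pi$ - phase lenses ( twice the area per ridge of a $2 \pi$ design ) . the $\delta$ assumed for silicon is likewise not from the text :

```python
import math

# Consistency sketch for the ground-test geometry and the table 1 values.
# Hedged assumptions (not stated verbatim in the text): thin-lens imaging
# with the source 150 m before the optic and the detector 450 m behind it;
# 8 keV design energy; ridge boundaries r_n = sqrt(4*n*lam*f) for the
# pi-phase design; and an assumed delta = 7.58e-6 for silicon at 8 keV.

E_KEV = 8.0
LAM = 1.23984e-9 / E_KEV                     # wavelength in meters
F_EFF = 1.0 / (1.0 / 150.0 + 1.0 / 450.0)    # thin-lens effective focal length
DELTA = 7.58e-6                              # assumed Si optical constant
T2PI = LAM / DELTA                           # 2*pi ridge height, ~20.5 um

def radius(n):
    """Boundary of the n-th Fresnel ridge under the assumptions above."""
    return math.sqrt(4.0 * n * LAM * F_EFF)

def diameter(n_ridges):
    return 2.0 * radius(n_ridges)

def edge_ridge_width(n_ridges):
    """Width of the outermost (narrowest) ridge, i.e. the min ridge spacing."""
    return radius(n_ridges) - radius(n_ridges - 1)

print(round(F_EFF, 1))                        # 112.5 (meters)
print(round(diameter(32) * 1e3, 2))           # 2.99 (mm), the x3/x4/x5 row
print(round(edge_ridge_width(32) * 1e6, 1))   # ~24 um min ridge spacing
print(round(diameter(80) * 1e3, 2))           # 4.72 (mm), the x6 row
print(round(edge_ridge_width(80) * 1e6, 1))   # ~15 um
```

under these assumptions the computed diameters and minimum ridge spacings land on the table 1 values to within a few percent , which supports the reconstruction of the design geometry .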
using an analysis similar to that presented in equation 4 , albeit ignoring absorption and using a phasor formalism , the efficiency of each scanned pfl was estimated from the profiles obtained from the profilometer measurements . note that the profiles are measured along a chosen radial path and circular symmetry was assumed . figure 3 illustrates the profile measurements and a comparison to the design profile for a 3 mm diameter pfl ( x3 ) for the regions near the center of the device ( leftmost plot ) and near the edge ( rightmost plot ) .

                x3 center   x3 edge   x4 center   x4 edge   x5 center   x5 edge
  average       75.6%       59.0%     72.5%       54.8%     68.5%       55.3%
  maximum       77.2%       67.5%     80.9%       64.8%     74.8%       63.4%
  minimum       69.6%       48.0%     52.6%       41.1%     59.1%       36.5%

                x6 center   x6 half radius   x6 edge
  average       61.6%       55.1%            32.4%
  maximum       65.2%       61.6%            36.1%
  minimum       35.8%       54.8%            21.1%

table 2 lists the maximum , minimum , and average efficiency for the different fabricated pfl s based upon the profile measurements . the values quoted for the maxima and minima for a given design are those for a specific lens , i.e. , the ensemble of measurements for a specific design was used to determine the appropriate designation . the quoted efficiencies do not take into account absorptive losses due to either the fresnel profile or the substrate . assuming an 8 step / fresnel ridge profile and 8 kev , the reduction in collection efficiency is approximately 14% due to the phase - retarding material in the stepped - fresnel profile and 30% , i.e.
, due to the m silicon substrate .note that the effects of attenuation can be significantly reduced by fabricating pfl s designed for higher photon energies .the data represented in table 2 demonstrate that , as indicated from profile measurements , stepped - profile pfl s micro - fabricated in silicon have efficiencies significantly larger than that for the simpler zone plates and phase - reversal zone plates .the data also illustrate that efficiencies determined from the finer pitch fresnel zones are reduced as compared to the larger pitch center fresnel zones .this is due to the fact that it is more difficult to accurately fabricate zones with higher aspect ratios , defined as the ratio of fresnel ridge height to ridge pitch .a significant contribution to this effect is due to the aspect - ratio dependence of the etching process ; it is more difficult to remove silicon from narrow ridge regions as shown in figure 3 .work has progressed on designing appropriate compensation in the lithographic mask and this technique has been demonstrated in the fabrication of a second - generation of fresnel lens structures that exhibit a much reduced aspect - ratio dependence of the pfl profiles .we have fabricated ground - test pfl s in silicon using gray - scale lithography .we have determined the imaging performance of these devices via analysis of the measured profiles of the fabricated optics .these results indicate that the efficiencies , although less than ideal , are a significant improvement over the theoretical maximum that can be obtained with zone plates and phase - reversal zone plates .we plan on introducing these devices into the 600 m test beam to demonstrate their imaging capability and verify the anticipated efficiency determination via _ in situ _ x - ray measurements .this material is based upon work supported by the national aeronautics and space administration under grant apra04 - 0000 - 0087 issued through the science mission directorate office and by 
goddard space flight center through the director s discretionary fund .g. skinner : astronomy and astrophysics , * 375 * , 691 ( 2001 ) g. skinner : astronomy and astrophysics , * 383 * , 352 ( 2002 ) h. dammann , optik * 31 * , 95 ( 1970 ) b.l .henke , e.m gullikson , & j.c .davis : at .data and nucl . datatables , * 54 * , 181 ( 1993 ) j. kirz , j. opt .amer . * 64 * , 301 ( 1974 ) b. morgan , c.m .waits , j. krizmanic , & r. ghodssi , jour .micro - electro - mechanical - systems ( jmems ) * 13 * , 113 ( 2004 ) b. morgan , c.m .waits , & r. ghodssi , microelectronic engineering * 77 * , 850 ( 2005 ) d. jerius , t.j .gaetz , & m. karovska , proc .spie * 5165 * , 433 ( 2004 ) | diffractive / refractive optics , such as phase fresnel lenses ( pfl s ) , offer the potential to achieve excellent imaging performance in the x - ray and gamma - ray photon regimes . in principle , the angular resolution obtained with these devices can be diffraction limited . furthermore , improvements in signal sensitivity can be achieved as virtually the entire flux incident on a lens can be concentrated onto a small detector area . in order to verify experimentally the imaging performance , we have fabricated pfl s in silicon using gray - scale lithography to produce the required fresnel profile . these devices are to be evaluated in the recently constructed 600-meter x - ray interferometry testbed at nasa / gsfc . profile measurements of the fresnel structures in fabricated pfl s have been performed and have been used to obtain initial characterization of the expected pfl imaging efficiencies . _ space research association + goddard space flight center , greenbelt , maryland 20771 usa + . of electrical and computer engineering , university of maryland , college park , maryland 20742 usa + , 9 , avenue du colonel - roche 31028 toulouse , france _ |
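as a rough cross - check on the efficiency figures quoted above , the first - order diffraction efficiency of an ideal n - step staircase approximation to a blazed fresnel phase profile can be computed with a phasor ( fourier - coefficient ) sum ; the numeric integral reproduces the closed - form sinc - squared result . this is a minimal sketch that ignores absorption and fabrication error , so it gives the theoretical ceiling ( about 95% at 8 steps , 99% at 16 ) rather than the measured values , and it is not the authors' analysis code :

```python
import numpy as np

def stepped_efficiency_numeric(n_steps, n_samples=100_000):
    """First-order diffraction efficiency of an ideal n-step staircase
    approximation to a blazed (Fresnel) phase profile, ignoring
    absorption: |first Fourier coefficient of exp(i*phi(x))|^2."""
    x = (np.arange(n_samples) + 0.5) / n_samples        # one period, [0, 1)
    phi = 2 * np.pi * np.floor(n_steps * x) / n_steps   # staircase phase
    c1 = np.mean(np.exp(1j * phi) * np.exp(-2j * np.pi * x))
    return abs(c1) ** 2

def stepped_efficiency_closed(n_steps):
    """Closed-form result of the same phasor sum: [sin(pi/n)/(pi/n)]^2."""
    a = np.pi / n_steps
    return (np.sin(a) / a) ** 2

for n in (8, 16):
    print(n, stepped_efficiency_numeric(n), stepped_efficiency_closed(n))
```

the gap between this ideal ceiling and the 55 - 75% values in table 2 is what the profile - error analysis quantifies .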
the joint analysis of different types of big data should provide a better understanding of the molecular mechanisms underlying cancers , , , , and . in this workwe introduce an approach that involves the use of matrix factorizations such as the singular value decomposition ( svd ) to simultaneously analyze a finite number of matrices by iteratively identifying cross - correlations between them . the iterative singular value decomposition ( isvd )jointly analyzes a finite number of data matrices by automatically identifying any correlations that may exist among the rows of matrices , or the rows and columns of any one of the matrices . since big data sets contain many distinct signals that associate the measured variables and samples , repeated application of the isvd on re - normalized data iteratively detects these signals as orthogonal correlations among the matrices or within a single matrixthis approach is computationally efficient .the following notation is used in this work .* notation : * * the linear space of real column vectors with coordinates . * the linear space of real matrices . * the subset of real orthonormal matrices . * the subset of real non - negative diagonal matrices . * the subset of real matrices whose columns are normalized .the singular value decomposition of a matrix is the decomposition of into the product of three matrices of the form where , , and .the diagonal entries of are called the singular values of , and columns of and are called left and right singular vectors of respectively .the left - singular vectors are eigenvectors of and right - singular vectors are eigenvectors of .non - zero singular values of are same as the square roots of non - zero eigenvalues of ( which are the same as the non - zero eigenvalues of ) . based on the svd , the matrix can be written as an outer product of orthogonal rank one approximations where and are the columns of and respectively ; is the diagonal element of . 
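the outer - product expansion and the eigenvector relations above are easy to verify numerically ; a minimal numpy sketch ( illustrative , not from the paper ) :

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))

# economy-size SVD: A = U diag(d) Vt
U, d, Vt = np.linalg.svd(A, full_matrices=False)

# A as a sum of rank-one outer products d_i * u_i * v_i^T
A_rebuilt = sum(d[i] * np.outer(U[:, i], Vt[i, :]) for i in range(len(d)))
assert np.allclose(A, A_rebuilt)

# the nonzero eigenvalues of A^T A are the squared singular values of A
eig_desc = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]
assert np.allclose(eig_desc, d ** 2)
```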
given a set of matrices with full column rank .the svd of stacked matrix will be given by the column of can be written as where .if represent the ( ) columns of , then can be viewed as correlation detectors of the signal that is common to .that is , we can detect the rows of that are mutually correlated with the common signal by applying thresholds to the vectors .note that the above procedure will result in distinct signals that link the rows of via the common signals .the analysis of the rows of will be based on .specifically , given the rank one approximation to the data we form the residual data and apply rank one approximation to the residual data pair .this procedure is iterated until the largest singular value of residue falls below a certain threshold that is near zero .this algorithm is based on the svd of the stacked matrix .define ( ) to be the column of matrix ( ) and be the diagonal entry of matrix in ( [ eq : svd ] ) .the first column of can be written as where .the rank one approximation to svd of is given as with we are interested to find the support of signals in .the vectors in the rank one approximation of can be viewed as detectors of the signal in , respectively .that is , the components of with large absolute magnitude correspond to the rows in that are highly correlated with .note that is common in the matrix factorization of .the indices of with large absolute magnitude are the signal support for respectively .we apply the universal threshold proposed by donoho and johnstone ( and ) to detect components of with large absolute magnitude .now we subtract from respectively and get the rank one approximation to svd of the pair .it is shown in theorem that the rank one approximation to svd of is given as with similarly to the first iteration , the signals in are detected .the sequential application of the isvd is continued until the maximum singular value of drops below a predefined threshold indicating that the noise floor of the data has been 
reached .the number of iterations is at most rank of .therefore , the isvd algorithm is computationally efficient and enhances signal detection by rank one approximation . to get the largest singular value and singular vector of we can use the matlab command . for further information about the algorithm of finding a few eigenvalue of a large matrixplease refer to .* theorem 1 : * let the svd of the stack matrix be then , splitting apart the first column of , we can write satisfying let which latter equality holds since the claim is that * the nonzero eigenvalues of are , * the nonzero eigenvalues of are , * with the corresponding eigenvalues , so that the above representation of is its singular value decomposition .* proof : * the proof is in the calculation and similarly , etc . the algorithm proceeds to find satisfying * theorem 2 : * let be an unit vector and be a set of orthonormal vectors .define two matrices ( ) and ( ) as follows and where the support of and in are and respectively and the support of in is .define as the stack of two matrices and it is claimed that * and are the eigenvalues of corresponding to eigenvector and respectively . * if then and are the eigenvectors of corresponding to eigenvalue . 
proof : by multiplying to both sides of ( [ eq1 ] ) similarly for .suppose , by multiplying to both sides of ( [ eq1 ] ) similarly for .* theorem 3 : * let be an unit vector and be a set of i.i.d .white noise vectors , in which is a vector .define the matrices ( ) as follows it is claimed that * is the approximate eigenvalues of corresponding to eigenvector for a non - trivial range of signal to noise ratio .proof : [ eq : eq1111 ] a^ta & = s+n_1 & & s+n_m & n_m+1 & & n_p s^t+n_1^t + + s^t+n_m^t + n_m+1^t + + n_p^t + & = ss^t+n_1s^t+sn_1^t+n_1n_1^t + & + & + ss^t+n_ms^t+sn_m^t+n_mn_m^t + & + n_m+1n_m+1^t++n_pn_p^t by multiplying from right hand side to both sides of ( [ eq : eq1111 ] ) [ eq : eq2222 ] a^ta = s+ it is easy to see that if s or [ eq : eq22221 ] then the right hand side of ( [ eq : eq2222 ] ) can be approximated by .it is noticeable that if , then with a high probability the inequality in ( [ eq : eq22221 ] ) is satisfied . below the convergence behaviour of as growsis studied . by taking limit from both sides of ( [ eq : eq2222 ] ) we have [ eq : eq222222 ] a^ta = s+=s by law of large number we have [ eq : eq3333 ] pr(=0)=1 from ( [ eq : eq222222 ] ) and ( [ eq : eq3333 ] ) we can have almost surely [ eq : eq4444 ] a^ta = s therefore almost surely for any given there exists an such that for all [ eq : eq5555 ] |a^ta - s| < and almost surely [ eq : eq6666 ] m(s- ) < a^tas <m(s+ ) . from chebychev s inequality [ eq : eq6666 ] p(|_i=1^mn_i|> ) = let be the weakest signal to noise ratio that thresholding strategy works , .[ eq : eq6666 ] p(|_i=1^mn_i| > ) = = what is the critical ?critical can be achieved from roc curve and donoho johnson threshold . 
for critical can find an upper bound on probability of error for any value of error .the maximum probability of error is a function of signal to noise ratio , because the critical is a function of signal to noise ratio and is the power of signal .the isvd algorithm is able to detect signals that are common or unique in finite number of data sets .signals are detected sequentially based on signal strength .very weak signals with small singular values are detectable since noise background is systematically reduced relative to signal strength . note that the isvd is computationally feasible in situations where the gsvd is not. it may also be theoretically more meaningful than the generalized singular value decomposition ( gsvd ) , , and . a zeinalzadeh , t wenska , g okimoto , integrated analysis of multiple high - dimensional data sets by joint rank-1 matrix approximations , 54th ieee conference on decision and control ( cdc ) , pp . 38523857 , 2015 .g okimoto , a zeinalzadeh , t wenska , m loomis and etc . ,joint analysis of multiple high - dimensional data types using sparse matrix approximations of rank-1 with applications to ovarian and liver cancer , biodata mining 9 ( 1 ) , 24 , 2016 .s. p. ponnapalli , m. a. saunders , c. f. van loan , orly alter , a higher - order generalized singular value decomposition for comparison of global mrna expression from multiple organisms , science , 6(12 ) , 2011 . | we develop an iterative version of the singular value decomposition ( isvd ) that jointly analyzes a finite number of data matrices to identify signals that correlate among the rows of matrices . it will be illustrated how the supervised analysis of a big data set by another complex , multi - dimensional phenotype using the isvd algorithm could lead to signal detection . |
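a compact sketch of the deflation loop described in the isvd section above : stack the matrices row - wise , take the leading singular triple , threshold the per - matrix detector vectors with the donoho - johnstone universal threshold , subtract the rank - one term , and repeat until the leading singular value is negligible . names , the noise level , and the stopping constant are illustrative , not the authors' implementation ; a production version would use a partial svd ( a few - largest - singular - values routine , as the paper suggests ) rather than a full `np.linalg.svd` :

```python
import numpy as np

def universal_threshold(a, sigma=1.0):
    """Donoho-Johnstone universal threshold sigma * sqrt(2 log n)."""
    return sigma * np.sqrt(2.0 * np.log(len(a)))

def isvd(matrices, tol=1e-8, max_signals=50):
    """Iterative SVD sketch: peel rank-one signals off the row-stacked
    data until the leading singular value drops below `tol`."""
    X = np.vstack(matrices).astype(float)
    row_counts = [m.shape[0] for m in matrices]
    signals = []
    for _ in range(max_signals):
        U, d, Vt = np.linalg.svd(X, full_matrices=False)
        if d[0] < tol:
            break                       # noise floor reached
        u, v = U[:, 0], Vt[0, :]
        # split the scaled left singular vector into per-matrix detector
        # vectors a_i; their large entries mark the rows of each matrix
        # that correlate with the common signal v
        detectors = np.split(d[0] * u, np.cumsum(row_counts)[:-1])
        support = [np.flatnonzero(np.abs(a) > universal_threshold(a))
                   for a in detectors]
        signals.append((v, detectors, support))
        X -= d[0] * np.outer(u, v)      # deflate and iterate
    return signals
```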
adequate models of human language for syntactic analysis and semantic interpretation are typically of context - free complexity or beyond .indeed , prolog - style definite clause grammars ( dcgs ) and formalisms such as patr with feature - structures and unification have the power of turing machines to recognise arbitrary recursively enumerable sets .since recognition and analysis using such models may be computationally expensive , for applications such as speech processing in which speed is important finite - state models are often preferred .when natural language processing and speech recognition are integrated into a single system one may have the situation of a finite - state language model being used to guide speech recognition while a unification - based formalism is used for subsequent processing of the same sentences .rather than write these two grammars separately , which is likely to lead to problems in maintaining consistency , it would be preferable to derive the finite - state grammar automatically from the ( unification - based ) analysis grammar .the finite - state grammar derived in this way can not in general recognise the same language as the more powerful grammar used for analysis , but , since it is being used as a front - end or filter , one would like it not to reject any string that is accepted by the analysis grammar , so we are primarily interested in ` sound approximations ' or ` approximations from above ' .attention is restricted here to approximations of context - free grammars because context - free languages are the smallest class of formal language that can realistically be applied to the analysis of natural language .techniques such as restriction can be used to construct context - free approximations of many unification - based formalisms , so techniques for constructing finite - state approximations of context - free grammars can then be applied to these formalisms too .a ` finite - state calculus ' or ` finite automata toolkit ' 
is a set of programs for manipulating finite - state automata and the regular languages and transducers that they describe .standard operations include intersection , union , difference , determinisation and minimisation .recently a number of automata toolkits have been made publicly available , such as fire lite , grail , and fsa utilities .finite - state calculus has been successfully applied both to morphology and to syntax ( constraint grammar , finite - state syntax ) .the work described here used a finite - state calculus implemented by the author in sicstus prolog .the use of prolog rather than c or c++ causes large overheads in the memory and time required .however , careful account has been taken of the way prolog operates , its indexing in particular , in order to ensure that the asymptotic complexity is as good as that of the best published algorithms , with the result that for large problems the prolog implementation outperforms some of the publicly available implementations in c++ .some versions of the calculus allow transitions to be labelled with arbitrary prolog terms , including variables , a feature that proved to be very convenient for prototyping although it does not essentially alter the power of the machinery .( it is assumed that the string being tested consists of ground terms so no unification is performed , just matching . )there are two main ideas behind this algorithm .the first is to describe the finite - state approximation using formulae with regular languages and finite - state operations and to evaluate the formulae directly using the finite - state calculus .the second is to use , in intermediate stages of the calculation , additional , auxiliary symbols which do not appear in the final result .a similar approach has been used for compiling a two - level formalism for morphology . 
in this case the auxiliary symbols are dotted rules from the given context - free grammar .a dotted rule is a grammar rule with a dot inserted somewhere on the right - hand side , e.g. _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ s np vp + s np vp + s np vp _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ however , since these dotted rules are to be used as terminal symbols of a regular language , it is convenient to use a more compact notation : they can be replaced by a triple made out of the nonterminal symbol on the left - hand side , an integer to determine one of the productions for that nonterminal , and an integer to denote the position of the dot on the right - hand side by counting the number of symbols to the left of the dot .so , if ` s np vp ' is the fourth production for s , the dotted rules given above may be denoted by , and , respectively .it will turn out to be convenient to use a slightly more complicated notation : when the dot is located after the last symbol on the right - hand side we use as the third element of the triple instead of the corresponding integer , so the last triple is instead of .( note that is an additional symbol , not a variable . ) moreover , for epsilon - rules , where there are no symbols on the right - hand side , we treat the as it were a real symbol and consider there to be two corresponding dotted rules , e.g. and corresponding to ` mod ' and ` mod ' for the rule ` mod ' . 
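the triple notation can be generated mechanically ; a small sketch ( the dictionary grammar format is illustrative , and 'z' stands for the special end - of - rule marker described above ) :

```python
def dotted_rules(grammar):
    """Enumerate dotted rules as (nonterminal, production, position)
    triples: productions numbered from 1, 'z' marking the dot after the
    last right-hand-side symbol, and epsilon-rules contributing the two
    triples (x, k, 0) and (x, k, 'z')."""
    triples = []
    for lhs, productions in grammar.items():
        for k, rhs in enumerate(productions, start=1):
            if not rhs:                              # epsilon-rule
                triples += [(lhs, k, 0), (lhs, k, 'z')]
            else:
                triples += [(lhs, k, j) for j in range(len(rhs))]
                triples.append((lhs, k, 'z'))
    return triples

grammar = {'s': [['a', 's', 'b'], []]}               # s -> a s b | epsilon
print(dotted_rules(grammar))
# [('s', 1, 0), ('s', 1, 1), ('s', 1, 2), ('s', 1, 'z'), ('s', 2, 0), ('s', 2, 'z')]
```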
using these dotted rules as auxiliary symbols we can work with regular languages over the alphabet where is the set of terminal symbols, is the set of nonterminals , is the number of productions for nonterminal , and is the number of symbols on the right - hand side of the production for .it will be convenient to use the symbol as a ` wildcard ' , so means and means .( this last example explains why we use rather than ; it would otherwise not be possible to use the ` wildcard ' notation to denote concisely the set . )we can now attempt to derive an expression for the set of strings over that represent a valid parse tree for the given grammar : the tree is traversed in a top - down left - to - right fashion and the daughters of a node x expanded with the production for x are separated by the symbols .( equivalently , one can imagine the auxiliary symbols inserted in the appropriate places in the right - hand side of each production so that the grammar is then unambiguous . )consider , for example , the following grammar : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ s a s b + s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _then the following is one of the strings over that we would like to accept , corresponding to the string accepted by the grammar : our first approximation to the set of acceptable strings is , i.e. strings that start with beginning to parse an s and end with having parsed an s. from this initial approximation we subtract ( that is , we intersect with the complement of ) a series of expressions representing restrictions on the set of acceptable strings : and , respectively , while juxtaposition denotes concatenation and the bar denotes complementation ( ) . 
] formula [ e1 ] expresses the restriction that a dotted rule of the form , which represents starting to parse the right - hand side of a rule , may be preceded only by nothing ( the start of the string ) or by a dotted rule that is not of the form ( which would represent the end of parsing the right - hand side of a rule ) . formula [ e2 ] similarly expresses the restriction that a dotted rule of the form may be followed only by nothing or by a dotted rule that is not of the form . for each non - epsilon - rule with dotted rules , , for each : where where is the symbol on the right - hand side of the production for .formula [ e3 ] states that the dotted rule must be followed by ( or when ) when the next item to be parsed is the terminal , or by ( starting to parse an ) when the next item is the nonterminal .for each non - epsilon - rule with dotted rules , , for each : where + = + + formula [ e4 ] similarly states that the dotted rule must be preceded by ( or when ) when the previous item was the terminal , or by when the previous item was the nonterminal . for each epsilon - rule corresponding to dotted rules and : formulae [ e5 ] and [ e6 ] state that the dotted rule must be followed by , and must be preceded by . 
for each non - epsilon rule with dotted rules , , for each : and where formula [ e7 ] states that the next instance of that follows must be either ( a recursive application of the same rule ) or ( the next stage in parsing the same rule ) , and there must be such an instance .formula [ e8 ] states similarly that the closest instance of that precedes must be either ( a recursive application of the same rule ) or ( the previous stage in parsing the same rule ) , and there must be such an instance .when each of these sets has been subtracted from the initial approximation we can remove the auxiliary symbols ( by applying the regular operator that replaces them with ) to give the final finite - state approximation to the context - free grammar .it may be admitted that the notation used for the dotted rules was partly motivated by the possibility of immediately testing the algorithm using the finite - state calculus in prolog : the regular expressions listed above can be evaluated directly using the ` wildcard ' capabilities of the finite - state calculus .figure [ codefig ] shows the sequence of calculations that corresponds to applying the algorithm to the following grammar : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ s a s b + s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ with the following notational explanations it should be possible to understand the code and compare it with the description of the algorithm . * the procedure ` r(re , x ) ` evaluates the regular expression ` re ` and puts the resulting ( minimised ) automaton into a register with the name ` x ` . * ` list_fsa(x ) ` prints out the transition table for the automaton in register ` x ` .* terminal symbols may be any prolog terms , so the terminal alphabet is implicit . 
here atoms are used for the terminal symbols of the grammar ( ` a ` and ` b ` ) and terms of the form ` _ /_/_ ` are used for the triples representing dotted rules .the terms need not be ground , so the prolog variable symbol ` _ ` is used instead of the ` wildcard ' symbol in the description of the algorithm . * in a regular expression : * * ` # x ` refers to the contents of register ` x ` ; * * ` * ) -(#l ) ` instead of ; * * ` rem(re , l ) ` denotes the result of removing from the language ` re ` all terminals that match one of the expressions in the list ` l ` .the context - free language recognised by the original context - free grammar is .the result of applying the approximation algorithm is a 3-state automaton recognising the language .applying the restrictions expressed by formulae [ e1][e6 ] gives an automaton whose size is at most a small constant multiple of the size of the input grammar .this is because these restrictions apply locally : the state that the automaton is in after reading a dotted rule is a function of that dotted rule .when restrictions [ e7][e8 ] are applied the final automaton may have size exponential in the size of the input grammar .for example , exponential behaviour is exhibited by the following class of grammars : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ s a sa + + s a s a + s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ here the final automaton has states .( it records , in effect , one of three possibilities for each terminal symbol : whether it has not yet appeared , has appeared and must appear again , or has appeared and need not appear again . 
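to make the worked example concrete , here is a hedged brute - force sketch ( not the paper's finite - state construction ) : it linearises a parse tree into the dotted - rule - annotated string described earlier , then checks a subset of the local adjacency conditions ( in the spirit of formulae 1 - 6 ) by a linear scan . the tree encoding and function names are my own :

```python
def annotate(node):
    """Linearise a parse tree, inserting dotted-rule triples.
    node = (lhs, production_number, children); children are terminal
    strings or sub-nodes; an epsilon-production emits (x, k, 0), (x, k, 'z')."""
    lhs, k, children = node
    if not children:
        return [(lhs, k, 0), (lhs, k, 'z')]
    out = []
    for j, child in enumerate(children):
        out.append((lhs, k, j))
        out += [child] if isinstance(child, str) else annotate(child)
    out.append((lhs, k, 'z'))
    return out

def locally_valid(seq, grammar):
    """Linear-scan check of some local conditions: a start triple never
    directly follows an end triple, and each triple (x, k, j) with
    j < len(rhs) is followed by the j-th rhs symbol (the terminal itself,
    or the start triple of that nonterminal)."""
    nts = set(grammar)
    for i, t in enumerate(seq):
        if not isinstance(t, tuple):
            continue
        x, k, j = t
        rhs = grammar[x][k - 1]
        prev = seq[i - 1] if i > 0 else None
        nxt = seq[i + 1] if i + 1 < len(seq) else None
        if j == 0 and isinstance(prev, tuple) and prev[2] == 'z':
            return False                      # start right after an end
        if j != 'z' and rhs:
            sym = rhs[j]
            if sym in nts:
                if not (isinstance(nxt, tuple) and nxt[0] == sym and nxt[2] == 0):
                    return False
            elif nxt != sym:
                return False
    return True

grammar = {'s': [['a', 's', 'b'], []]}        # s -> a s b | epsilon
tree = ('s', 1, ['a', ('s', 1, ['a', ('s', 2, []), 'b']), 'b'])
seq = annotate(tree)
print([t for t in seq if isinstance(t, str)])  # the underlying terminal string
print(locally_valid(seq, grammar))
```

corrupting a terminal in the annotated string ( e.g. turning the final 'b' into an 'a' ) makes the scan reject it , mirroring how the subtracted restriction languages prune invalid strings .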
) there is an important computational improvement that can be made to the algorithm as described above : instead of removing all the auxiliary symbols right at the end they can be removed progressively as soon as they are no longer required ; after formulae [ e7][e8 ] have been applied for each non - epsilon rule with dotted rules , those dotted rules may be removed from the finite - state language ( which typically makes the automaton smaller ) ; and the dotted rules corresponding to an epsilon production may be removed before formulae [ e7][e8 ] are applied .( to ` remove ' a symbol means to substitute it by : a regular operation . ) with this important improvement the algorithm gives exact approximations for the left - linear grammars _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ s s a + + s s a + s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ and the right - linear grammars _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ s a s + + s a s + s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ in space bounded by and time bounded by .( it is easiest to test this empirically with an implementation , though it is also possible to check the calculations by hand . 
)pereira and wright s algorithm gives an intermediate unfolded recogniser of size exponential in for these right - linear grammars .there are , however , both left - linear and right - linear grammars for which the number of states in the final automaton is not bounded by any polynomial function of the size of the grammar .an examples is : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ s a s s a a + + s a s s a a + a a a a a + a a a a a + + a a a a a a + x _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ here the grammar has size and the final approximation has states .pereira and wright point out in the context of their algorithm that a grammar may be decomposed into ` strongly connected ' subgrammars , each of which may be approximated separately and the results composed .the same method can be used with the finite - state calculus approach : define the relation over nonterminals of the grammar s.t . 
iff appears on the right - hand side of a production for .then the relation , the reflexive transitive closure of intersected with its inverse , is an equivalence relation .a subgrammar consists of all the productions for nonterminals in one of the equivalence classes of .calculate the approximations for each nonterminal by treating the nonterminals that belong to other equivalence classes as if they were terminals .finally , combine the results from each subgrammar by starting with the approximation for the start symbol and substituting the approximations from the other subgrammars in an order consistent with the partial ordering that is induced by on the subgrammars .when the algorithm was applied to the 18-rule grammar shown in figure [ gramfig ] it was not possible to complete the calculations for any ordering of the rules , even with the improvement mentioned in the previous section , as the automata became too large for the finite - state calculus on the computer that was being used .( note that the grammar forms a single strongly connected component . )= nom nom mod bla= mod vp v np + mod p np vp v s + vp v vp + nom a nom vp v + nom n vp vp c vp + nom nom mod vp vp mod + nom nom s s mod s + s np s + np s s c s + np d nom s v np vp + .... 
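the subgrammar decomposition just described can be sketched directly from the definitions : build the " appears on the right - hand side " relation , close it reflexively and transitively , and intersect with its inverse to obtain the equivalence classes . a minimal sketch with an illustrative toy grammar ( not the 18-rule grammar of the figure ) :

```python
def subgrammars(grammar):
    """Equivalence classes of mutually reachable nonterminals: x ~ y iff
    x reaches y and y reaches x under the reflexive-transitive closure of
    'appears on some right-hand side of'. Each class is one strongly
    connected subgrammar."""
    nts = set(grammar)
    dep = {a: {x for rhs in prods for x in rhs if x in nts}
           for a, prods in grammar.items()}
    closure = {a: {a} | dep[a] for a in nts}     # reflexive + one step
    changed = True
    while changed:                               # naive transitive closure
        changed = False
        for a in nts:
            new = set().union(*(closure[b] for b in closure[a]))
            if not new <= closure[a]:
                closure[a] |= new
                changed = True
    classes = []
    for a in grammar:
        cls = frozenset(b for b in nts if b in closure[a] and a in closure[b])
        if cls not in classes:
            classes.append(cls)
    return classes

# illustrative grammar: np and pp are mutually recursive, so they form
# one strongly connected subgrammar; s and vp each stand alone
grammar = {'s':  [['np', 'vp']],
           'np': [['d', 'n'], ['np', 'pp']],
           'pp': [['p', 'np']],
           'vp': [['v', 'np']]}
print(subgrammars(grammar))
```

each class is then approximated separately , treating nonterminals from other classes as terminals , and the results are substituted together in dependency order as the text describes .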
% initial approximation : r ( s(s/_/0)^( * ) -(( * ) , a ) .r ( ( # a ) - ( * ) -(s+(s(_/_/_)-s(_/_/0))^( * ) ^s(s/1/0)^(( * ) ) , a ) .r ( ( # a ) - ( * ) -s(s/_/0)^( * ) ^s(s/1/2)^(( * ) ) , a ) .% formula ( 4 ) for " s - > a s b " : r ( ( # a ) - ( ( * ) ^s(s/1/0)^s(a))^s(s/1/1)^( * ) -( * ) , a ) .r ( ( # a ) - ( ( * ) ^s(s/1/2)^s(b))^s(s/1/z)^( * ) ^s(s/2/0)^(( * ) ) , a ) .r ( ( # a ) - ( ( * ) ^s(s/2/0))^s(s/2/z)^( * ) ^s(s/1/0)^(( -s(s/1/_))*)^(s(s/1/0)+s(s/1/1))^( * ) ^s(s/1/1)^(( -s(s/1/_))*)^(s(s/1/0)+s(s/1/2))^( * ) ^s(s/1/2)^(( -s(s/1/_))*)^(s(s/1/0)+s(s/1/z))^( * ) -( -s(s/1/_))*))^s(s/1/1)^( * ) -( -s(s/1/_))*))^s(s/1/2)^( * ) -( -s(s/1/_))*))^s(s/1/z)^($ * ) , a ) .% define the terminal alphabet : r ( s(s/1/0)+s(s/1/1)+s(s/1/2)+s(s/1/z)+s(s/2/0)+s(s/2/z)+s(a)+s(b ) , sigma ) .% remove the auxiliary symbols to give final result : r ( rem((#a)&((#sigma ) * ) , [ _ /_/ _ ] ) , f ) .list_fsa(f ) . .... however , it was found possible to simplify the calculation by omitting the application of formulae [ e7][e8 ] for some of the rules . ( the auxiliary symbols not involved in those rules could then be removed before the application of [ e7][e8 ] . ) in particular , when restrictions [ e7][e8 ] were applied only for the s and vp rules the calculations could be completed relatively quickly , as the largest intermediate automaton had only 406 states . 
yet the final result was still a useful approximation with 16 states .pereira and wright s algorithm applied to the same problem gave an intermediate automaton ( the ` unfolded recogniser ' ) with 56272 states , and the final result ( after flattening and minimisation ) was a finite - state approximation with 13 states .the two approximations are shown for comparison in figure [ prettyfig ] .each has the property that the symbols ` d ` , ` a ` and `n ` occur only in the combination ` d ` ` a` `this fact has been used to simplify the state diagrams by treating this combination as a single terminal symbol ` dan ` ; hence the approximations are drawn with 10 and 9 states , respectively .neither of the approximations is better than the other ; their intersection ( with 31 states ) is a better approximation than either .the two approximations have therefore captured different aspects of the context - free language .in general it appears that the approximations produced by the present algorithm tend to respect the necessity for certain constituents to be present , at whatever point in the string the symbols that ` trigger ' them appear , without necessarily insisting on their order , while pereira and wright s approximation tends to take greater account of the constituents whose appearance is triggered early on in the string : most of the complexity in pereira and wright s approximation of the 18-rule grammar is concerned with what is possible before the first accepting state is encountered .rimon and herz approximate the recognition capacity of a context - free grammar by extracting ` local syntactic constraints ' in the form of the left or right short context of length of a terminal .when this reduces to next(t ) , the set of terminals that may follow the terminal t. 
the effect of filtering with rimon and herz's next(t) is similar to applying conditions [e1]-[e6] from section [algsect], but the use of auxiliary symbols causes two differences which can both be illustrated with the following grammar:

s -> a x a | b x b
x -> (empty)

on the one hand, rimon and herz's `next' does not distinguish between different instances of the same terminal symbol, so any occurrence of a terminal, and not just the first one, may be followed by another. on the other hand, rimon and herz's `next' looks beyond the empty constituent in a way that conditions [e1]-[e6] do not, so is disallowed. thus an approximation based on rimon and herz's `next' would be , and an approximation based on conditions [e1]-[e6] would be . (however, the approximation becomes exact when conditions [e7]-[e8] are added.) both pereira and wright and rood start with the lr(0) characteristic machine, which they first `unfold' (with respect to `stacks' or `paths', respectively) and then `flatten'. the characteristic machine is defined in terms of dotted rules with transitions between them that are analogous to the conditions implied by formula [e3] of section [algsect]. when the machine is flattened, epsilon-transitions are added in a way that is in effect simulated by conditions [e2] and [e4]. (condition [e1] turns out to be implied by conditions [e2]-[e4].) it can be shown that the approximation obtained by flattening the characteristic machine (without unfolding it) is as good as the approximation obtained by applying conditions [e1]-[e6] ().
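the difference that next(t) makes can be made concrete with a small brute-force computation. the sketch below is an illustrative helper of ours, not code from the paper; it assumes the example grammar reads s -> a x a | b x b with x empty, enumerates the derivable terminal strings, and reads off next(t) directly:

```python
def sentences(grammar, start="s"):
    """Enumerate all terminal strings derivable from `start` by brute force
    (the grammar must be non-recursive for this to terminate)."""
    results = set()
    frontier = [(start,)]
    while frontier:
        form = frontier.pop()
        nts = [i for i, sym in enumerate(form) if sym in grammar]
        if not nts:
            results.add(form)  # no nonterminals left: a terminal string
            continue
        i = nts[0]
        for rhs in grammar[form[i]]:
            frontier.append(form[:i] + tuple(rhs) + form[i + 1:])
    return results

def next_sets(grammar):
    """Rimon and Herz style next(t): terminals that may directly follow t
    in some derivable terminal string."""
    nxt = {}
    for s in sentences(grammar):
        for a, b in zip(s, s[1:]):
            nxt.setdefault(a, set()).add(b)
    return nxt

# hypothetical encoding of the example grammar: s -> a x a | b x b, x -> empty
GRAMMAR = {"s": [["a", "x", "a"], ["b", "x", "b"]], "x": [[]]}
```

for this grammar, next_sets(GRAMMAR) gives {'a': {'a'}, 'b': {'b'}}: because x derives the empty string, each terminal may be followed by another copy of itself, which is the behaviour described in the text.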
moreover, if no nonterminal for which there is an epsilon-production is used more than once in the grammar, then . (the grammar in figure [gramfig] is an example for which ; the approximation found in section [ressect] includes strings such as `vvccvv' which are not accepted by for this grammar.) it can also be shown that is the same as the result of flattening the characteristic machine for the same grammar modified so as to fulfil the aforementioned condition by replacing the right-hand side of every epsilon-production with a new nonterminal for which there is a single epsilon-production. however, there does not seem to be a simple correspondence between conditions [e7]-[e8] and the `unfolding' used by pereira and wright or rood: even some simple grammars such as `s -> a s a | b s b' are approximated differently by [e1]-[e8] than by pereira and wright's and rood's methods. in the case of some simple examples (such as the grammar `s -> a s b' used earlier) the approximation algorithm presented in this paper gives the same result as pereira and wright's algorithm. however, in many other cases (such as the grammar `s -> a s a | b s b' or the 18-rule grammar in the previous section) the results are essentially different and neither of the approximations is better than the other. the new algorithm does not share the problem of pereira and wright's algorithm that certain right-linear grammars give an intermediate automaton of exponential size, and it was possible to calculate a useful approximation fairly rapidly in the case of the 18-rule grammar in the previous section. however, it is not yet possible to draw general conclusions about the relative efficiency of the two procedures. nevertheless, the new algorithm seems to have the advantage of being open-ended and adaptable: in the previous section it was possible to complete a difficult calculation by relaxing the conditions of formulae [e7]-[e8], and it is easy to see how those conditions might also be
strengthened. for example, a more complicated version of formulae [e7]-[e8] might check two levels of recursive application of the same rule rather than just one level, and it might be useful to generalise this to more levels of recursion in a manner analogous to rood's generalisation of pereira and wright's algorithm.

grimley evans, edmund, george kiraz, and stephen pulman. 1996. compiling a partition-based two-level formalism. coling-96, 454-459.
herz, jacky, and mori rimon. 1991. local syntactic constraints. second international workshop on parsing technology (iwpt-2).
kaplan, ronald, and martin kay. regular models of phonological rule systems. _computational linguistics_, *20(3)*: 331-378.
kempe, andré, and lauri karttunen. parallel replacement in finite state calculus. coling-96, 622.
pereira, fernando, and rebecca wright. finite-state approximation of phrase structure grammars. proceedings of the 29th annual meeting of the association for computational linguistics, 246-255.
pereira, fernando, and rebecca wright. finite-state approximation of phrase-structure grammars. cmp-lg/9603002.
raymond, darrell, and derick wood. march 1996. the grail papers. university of western ontario, department of computer science, technical report tr-491.
rimon, mori, and jacky herz. the recognition capacity of local syntactic constraints. acl proceedings, 5th european meeting.
rood, cathy. efficient finite-state approximation of context-free grammars. proceedings of ecai 96.
shieber, stuart. 1985. using restriction to extend parsing algorithms for complex-feature-based formalisms. proceedings of the 23rd annual meeting of the association for computational linguistics, 145-152.
van noord, gertjan. fsa utilities: manipulation of finite-state automata implemented in prolog. first international workshop on implementing automata, university of western ontario, london ontario, 29-31 august 1996.
watson, bruce. 1996.
implementing and using finite automata toolkits. proceedings of ecai 96.

although adequate models of human language for syntactic analysis and semantic interpretation are of at least context-free complexity, for applications such as speech processing in which speed is important finite-state models are often preferred. these requirements may be reconciled by using the more complex grammar to automatically derive a finite-state approximation which can then be used as a filter to guide speech recognition or to reject many hypotheses at an early stage of processing. a method is presented here for calculating such finite-state approximations from context-free grammars. it is essentially different from the algorithm introduced by pereira and wright, is faster in some cases, and has the advantage of being open-ended and adaptable.
single-particle interferometry has become a remarkable tool in various fields of quantum physics and sensor technology. interferometers for coherent atoms recently investigated the nature of time and measured inertial forces and gravitational acceleration. molecule interferometers proved the wave nature of large particles and contributed to the understanding of quantum decoherence. in neutron interferometers, the quantum-mechanical phase shift due to the earth's gravitational field was observed. moreover, remarkable progress was achieved in the field of matter-wave interferometry with charged particles such as electrons and ions, based on new developments concerning the beam source, precise electron guiding, coherent beam-path separation by nanostructures, and highly resolved spatial and temporal single-particle detection. this advance opened the door for experiments in aharonov-bohm physics and coulomb-induced decoherence. technical devices with interfering single particles can decrease the amount of destructive particle deposition in electron microscopy for the analysis of fragile biological specimens. all these technical and fundamental applications of single-particle interferometers rely on high phase sensitivity and are therefore susceptible to dephasing, which can be caused by external electromagnetic oscillations, mechanical vibrations or temperature drifts. in contrast to decoherence, where actual information about the quantum state is lost to the environment, dephasing is a collective, time-dependent phase shift of the interference pattern. both decoherence and dephasing cause a reduction of the contrast of the time-averaged interference pattern on the detector.
unlike decoherence, however, dephasing can in principle be corrected after the measurement if the temporal and spatial information of the single-particle events is known. then, two-particle correlations may be used to study the dephasing process and to reveal the undisturbed interference pattern. ever since the famous hanbury brown and twiss experiment, second-order correlations have been used successfully in many research areas. noise correlations play a key role here, as they give direct access to the quantum nature of the source. this understanding has not only laid the foundation for modern quantum optics, but also helped to prove the quantum nature of fermions and bosons. today, noise correlation analysis is widely used in modern astrophysics, quantum atom optics and particle physics. for matter waves, temporal correlations have been used to analyze the counting statistics of atom lasers and to demonstrate the coherent transfer of magnetic field fluctuations onto an atom laser. spatial correlations, on the other hand, have been used to analyze atoms in optical lattices or to study many-body states, such as the mott insulator state, in cold atom physics. in previous publications, we have demonstrated in a biprism electron interferometer how multifrequency dephasing caused by electromagnetic and vibrational oscillations can be corrected using second-order correlation analysis in combination with the amplitude spectrum of the correlation function. the latter can be used for the identification of unknown perturbation frequencies, since according to the wiener-khintchine theorem the fourier transform of the correlation function equals the power spectrum of the perturbed measurement signal. for the measurements, an interference pattern was shifted artificially by external perturbations, leading to a contrast reduction of the temporally integrated pattern on the detector.
using the time and position information of particle impacts at the detector, the numerical second-order correlation function was extracted. with this, we were able to reveal the unknown perturbation frequencies, the corresponding amplitudes, and the characteristics of the matter wave, such as contrast and pattern periodicity. the undisturbed interference pattern could be reconstructed with the parameters of the perturbation. our method is a powerful tool to prove the wave nature of particles, even if the integrated interference pattern has vanished. it therefore reduces the requirements for electromagnetic shielding and vibrational damping of the experimental setup, e.g. for mobile interferometers or experiments in a perturbing environment. furthermore, it can be used to sensitively detect electromagnetic and mechanical perturbations, or for the spectroscopy of the electromagnetic and vibrational response spectrum of an interferometer. this technique therefore has potential for application in sensor technology and can in principle be applied in interferometers for electrons, ions, neutrons, atoms and molecules that generate a spatial fringe pattern in the detection plane. for the application of the correlation analysis, the devices have to be equipped with a detector with high spatial and temporal single-particle resolution, which is available for all above-mentioned interferometers. another requirement is that the particle flight time is shorter than the cycle time of the perturbation. otherwise, the particles traverse many periods of the perturbation, and the perturbation is averaged out and cannot be resolved. this article provides a comprehensive description of the applied theory, which forms the basis for the experimental application of second-order correlations in single-particle interferometry.
in the first chapter, we give a detailed derivation of the two-dimensional second-order correlation theory for multifrequency perturbations, leading to the equations applied in former dephasing experiments. we deduce the explicit solution for the correlation function and explain under which conditions an approximation can be applied. the characteristics of the explicit and approximate solutions are discussed, and the determination of the matter-wave properties is shown. furthermore, we calculate the analytic solution of the corresponding amplitude spectrum used for the identification of the unknown perturbation frequencies and amplitudes. the invariance of the correlation function under time and space transformations is analyzed in detail, and the consequence for the determination of the perturbation parameters is shown. in the second part of this article, we investigate the general characteristics of numerical correlation functions. they are typically derived from a finite set of measurement data, causing statistics and noise to play an important role. temporal and spatial discretization then influence not only the contrast and the amplitude spectrum of the correlation function, but also the corresponding noise levels. this limits the maximal signal-to-noise ratio in the correlation analysis.
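the idea of a numerical correlation function derived from a finite set of detection events can be sketched in a few lines. the fragment below is an illustrative brute-force estimator of ours, not the paper's evaluation routine; the binning and the simple mean normalisation are assumptions made for the sketch:

```python
import numpy as np

def g2_numerical(t, y, u_bins, tau_bins, tau_max):
    """Estimate a second-order correlation g2(u, tau) from single-particle
    events (arrival times t, positions y) by histogramming the pairwise
    position and time differences of all pairs closer than tau_max in time."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    order = np.argsort(t)
    t, y = t[order], y[order]
    du, dtau = [], []
    for i in range(len(t)):
        j = i + 1
        while j < len(t) and t[j] - t[i] <= tau_max:
            dtau.append(t[j] - t[i])
            du.append(y[j] - y[i])
            j += 1
    H, _, _ = np.histogram2d(du, dtau, bins=[u_bins, tau_bins])
    # normalise so an uncorrelated (uniform) pair distribution gives g2 ~ 1
    return H / H.mean()
```

for uncorrelated events the estimator fluctuates around 1; a fringe pattern shifted by a slow perturbation produces the periodic structure in u and tau that the following chapters analyze.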
from our theoretical study, we identify an optimal discretization step size for best sensitivity. using single-particle simulations of a perturbed interference pattern, we cross-check our theoretical description and show how the correlation theory can be used to identify broad-band frequency noise.

in this chapter, the theory of second-order correlations in single-particle interferometry is derived and its properties are discussed in detail. first, the contrast reduction of a time-averaged interference pattern dephased by a perturbation is analyzed. afterwards, the explicit solution of the second-order correlation function is calculated, and it is discussed under which conditions an approximate solution is suitable. the determination of the contrast and spatial periodicity of the unperturbed interference pattern is demonstrated. the corresponding amplitude spectra used for the identification of unknown perturbation frequencies are derived. at the end of the chapter, the invariance of the correlation function under time and space transformations is analyzed in detail, and the consequence for the determination of the perturbation phases is discussed. in many experiments in single-particle interferometry, the interference pattern is detected using multichannel plates (mcps) in conjunction with a phosphor screen. the particle impacts generate light pulses on the phosphor screen that are temporally integrated with a charge-coupled device camera (ccd camera). an interference pattern that is dephased by a temporal perturbation is then irreversibly "washed out" in the spatial signal and its contrast is reduced. this behaviour shall be calculated in the following. the probability distribution that describes the particle impacts in the detection plane forming the interference pattern is given by

$$f(y,t) = f_0\left[1 + K\cos\left(ky + \varphi(t)\right)\right] \qquad ([eq1])$$

where $f_0$ assures normalization, and $K$ and $k$ indicate the contrast and wave number of the unperturbed interference pattern, with the spatial periodicity
$\lambda = 2\pi/k$. the time-dependent perturbation is described as a superposition of $N$ harmonic frequencies,

$$\varphi(t) = \sum_{j=1}^{N} \varphi_j\cos\left(\omega_j t + \phi_j\right) \qquad ([eq2])$$

with perturbation amplitudes (peak phase deviations) $\varphi_j$ and phases $\phi_j$. the perturbation leads to a washout of the time-averaged interference pattern. for one perturbation frequency ($N=1$), this can easily be seen by calculating the time average of equation ([eq1]). using bessel functions of the first kind, the exponential function can be rewritten as

$$\mbox{e}^{\,i\varphi_j\cos\left(\omega_j t+\phi_j\right)} = \sum_{n=-\infty}^{\infty} \!J_n(\varphi_j)\,\mbox{e}^{\,in\left(\omega_j t+\phi_j+\frac{\pi}{2}\right)} \qquad ([eq5])$$

only for $n=0$ is the limit of the time integral equal to one; for all other $n$ it approaches zero, such that the time-averaged interference pattern becomes

$$\langle f(y,t)\rangle_t = f_0\left[1 + K\,\!J_0(\varphi_1)\cos\left(ky\right)\right] . \qquad ([eq7])$$

the perturbation thus leads to a reduced contrast given by the perturbation amplitude: $\tilde K = K\,\!J_0(\varphi_1)$, with $|\tilde K| \le K$. in figure [fig1] at the top, the dependence of the time-averaged interference pattern (equation ([eq7])) on the peak phase deviation $\varphi_1$ is illustrated. the contrast is zero when $\varphi_1$ reaches the first zero of $J_0$ at $\varphi_1 \approx 2.405$. the contrast returns for larger peak phase deviations, but does not recover completely. additionally, the interference pattern is phase shifted by $\pi$ as the sign of $J_0(\varphi_1)$ changes from positive to negative. this behaviour is repeated for higher peak phase deviations, and the contrast is further reduced. for multifrequency perturbations ($N>1$) with peak phase deviations $\varphi_j$, equation ([eq7]) becomes

$$\langle f(y,t)\rangle_t = f_0\bigg[1 + K\prod_{j=1}^{N} \!J_0(\varphi_j)\,\cos\left(ky\right)\bigg] . \qquad ([eq8])$$

here, the reduced contrast depends on all peak phase deviations via the product of the zeroth-order bessel functions. therefore, the contrast is typically reduced more strongly than in the single-frequency case in figure [fig1].

figure [fig1]: dependence of the time-averaged interference pattern (equation ([eq7])) on the peak phase deviation. the contrast vanishes at the first zero of $J_0$ and returns, though not completely, for larger peak phase deviations; at each sign change of $J_0$, the phase of the interference pattern is shifted by $\pi$.
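the washout relation, namely that time-averaging multiplies the fringe contrast by $J_0(\varphi_1)$, can be checked numerically. the snippet below is a self-contained sketch (the 50 hz frequency is an arbitrary choice) that evaluates $J_0$ from its integral representation and compares it with the contrast of a brute-force time average:

```python
import numpy as np

def j0(x, n=20000):
    """Bessel function J0 via the integral representation
    J0(x) = (1/pi) * integral_0^pi cos(x*sin(theta)) d(theta)."""
    theta = (np.arange(n) + 0.5) * np.pi / n   # midpoint rule nodes
    return np.mean(np.cos(x * np.sin(theta)))

def washed_out_contrast(K, phi1, omega=2 * np.pi * 50.0, n=20000):
    """Contrast of the time-averaged pattern 1 + K*cos(ky + phi1*sin(w t)),
    read off from the averaged intensity at ky = 0 and ky = pi."""
    t = np.linspace(0.0, 2 * np.pi / omega, n, endpoint=False)
    avg = lambda ky: np.mean(1 + K * np.cos(ky + phi1 * np.sin(omega * t)))
    fmax, fmin = avg(0.0), avg(np.pi)
    return (fmax - fmin) / (fmax + fmin)
```

for a peak phase deviation at the first zero of $J_0$ (about 2.405 rad) the averaged contrast drops to zero, exactly as described above.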
with the spatial and temporal information of the particles arriving in the detection plane, it is possible to reveal the contrast and spatial periodicity of the unperturbed interference pattern by correlation analysis. this is possible because the correlation function takes the spatial and temporal differences into account, in contrast to the temporally integrated interference pattern. on timescales below the period of the perturbation, the interference pattern is then still visible and not influenced by the perturbation. furthermore, the characteristics of the perturbation (frequencies, peak phase deviations and phases) can be determined from the correlation analysis. with the probability distribution in equation ([eq1]), the second-order correlation function reads

$$g^{(2)}(u,\tau) = \frac{\left\langle f(y,t)\,f(y+u,t+\tau)\right\rangle_{y,t}}{\left\langle f(y,t)\right\rangle_{y,t}\left\langle f(y+u,t+\tau)\right\rangle_{y,t}} \qquad ([eq9])$$

with $\langle\,\cdot\,\rangle_{y,t}$ denoting the average over position and time,

$$\left\langle g(y,t)\right\rangle_{y,t} = \lim_{L,T\rightarrow\infty}\frac{1}{LT}\int_0^L\!\!\int_0^T g(y,t)\,\mathrm{d}t\,\mathrm{d}y . \qquad ([eq10])$$

if the acquisition time and length are large compared to the involved perturbation periods and the spatial periodicity, equation ([eq9]) can be solved analytically. first, the terms in the denominator of equation ([eq9]) are calculated using equations ([eq1]) and ([eq10]). in the limit of large acquisition length, the spatial average over the cosine term vanishes, yielding $\langle f(y,t)\rangle_{y,t} = f_0$ ([eq13]). using the shifts $y\rightarrow y+u$ and $t\rightarrow t+\tau$ in equation ([eq11]), the second term in the denominator also becomes $f_0$. this is expected, because shifts in time and space should not alter the long time and position average of the probability distribution.
using equation ( [ eq1 ] ) and ( [ eq10 ] ) , the numerator in equation ( [ eq9 ] ) results in \,\mathrm{d}y\mathrm{d}t~.\end{aligned}\ ] ] similar as before , the second and third term in equation ( [ eq14 ] ) vanish , leaving in the limit , again , yielding with equation ( [ eq2 ] ) using equation ( [ eq5 ] ) for perturbation frequencies equation ( [ eq16 ] ) can be rewritten , yielding }}_{=\,a^+_{n_j , m_j}}\cdot\mbox{e}^{i\left(n_j+m_j\right)\omega_j t}\,\mathrm{d}t\ , + \nonumber\\ & \qquad+\mbox{e}^{iku } \int_0^t \prod_{j=1}^n \sum_{n_j , m_j}\ , \underbrace{\!j_{n_j}(\varphi_j)\,\!j_{m_j}(\varphi_j)\mbox{e}^{im_j\omega_j \tau}\mbox{e}^{i\left[n_j\left(\phi_j-\frac{\pi}{2}\right)+m_j\left(\phi_j+\frac{\pi}{2}\right)\right]}}_{=\,a^-_{n_j , m_j}}\cdot\mbox{e}^{i\left(n_j+m_j\right)\omega_j t}\,\mathrm{d}t \bigg ) \nonumber\\ & = \,f_0 ^ 2 + \lim_{t\rightarrow\infty } \frac{f_0 ^ 2}{t}\frac{k^2}{4}\,\cdot \nonumber\\ & \qquad\cdot\bigg(\mbox{e}^{-iku}\int_0^t \sum_{n_1,m_1}\cdots\sum_{n_n , m_n } a^+_{n_1,m_1}\cdots a^+_{n_n , m_n}\cdot \mbox{e}^{i\left(n_1+m_1\right)\omega_1 t}\cdots \mbox{e}^{i\left(n_n+m_n\right)\omega_n t}\,\mathrm{d}t\ , + \nonumber\\ & \qquad+\mbox{e}^{iku}\int_0^t \sum_{n_1,m_1}\cdots\sum_{n_n , m_n } a^-_{n_1,m_1}\cdots a^-_{n_n , m_n}\cdot \mbox{e}^{i\left(n_1+m_1\right)\omega_1 t}\cdots \mbox{e}^{i\left(n_n+m_n\right)\omega_nt}\,\mathrm{d}t \bigg ) \nonumber\\ & = \,f_0 ^ 2 + f_0 ^ 2\frac{k^2}{4}\,\cdot \nonumber\\ & \qquad\cdot\bigg(\mbox{e}^{-iku } \sum_{n_1,m_1}\cdots\sum_{n_n , m_n } a^+_{n_1,m_1}\cdots a^+_{n_n , m_n}\cdot\lim_{t\rightarrow\infty } \frac{1}{t}\int_0^t \mbox{e}^{i\sum_{j=1}^n\left(n_j+m_j\right)\omega_j t}\,\mathrm{d}t\ , + \nonumber\\ & \qquad+\mbox{e}^{iku } \sum_{n_1,m_1}\cdots\sum_{n_n , m_n } a^-_{n_1,m_1}\cdots a^-_{n_n , m_n}\cdot\lim_{t\rightarrow\infty } \frac{1}{t}\int_0^t \mbox{e}^{i\sum_{j=1}^n\left(n_j+m_j\right)\omega_j t}\,\mathrm{d}t \bigg)~.\end{aligned}\ ] ] a closer look to the time integral 
reveals that it can only become zero or one:

$$\lim_{T\rightarrow\infty}\frac{1}{T}\int_0^T \mbox{e}^{\,i\sum_{j=1}^N\left(n_j+m_j\right)\omega_j t}\,\mathrm{d}t = \begin{cases}1, & \sum_{j=1}^N\left(n_j+m_j\right)\omega_j = 0\\[4pt] 0, & \text{otherwise.}\end{cases}$$

this reduces the sums in equation ([eq18]) to a (single) sum over all integer multiplets $\{n_j,m_j\}$ for which the constraint

$$\sum_{j=1}^N\left(n_j+m_j\right)\omega_j = 0 \qquad ([eq20])$$

is fulfilled. here it shall be noted that for a finite acquisition time $T$, the constraint has to be relaxed to $\left|\sum_{j=1}^N\left(n_j+m_j\right)\omega_j\right| < \omega_\mathrm{min}$, because the minimal resolvable frequency is defined by the measurement time via $\omega_\mathrm{min} = 2\pi/T$. in the following, however, it is assumed that $T\rightarrow\infty$, and therefore equation ([eq20]) is used for the calculations and discussions. trivially, equation ([eq20]) is satisfied for all integer multiplets with $m_j = -n_j$. however, depending on the specific values of $\omega_j$, the constraint might be fulfilled by additional integer multiplets with $m_j \neq -n_j$. the constraint from equation ([eq20]) can be expressed mathematically by introducing a function $c\left(\{n_j,m_j\}\right) = \sum_{j=1}^N\left(n_j+m_j\right)\omega_j$, with the kernel of $c$ being the set of all integer multiplets for which $c\left(\{n_j,m_j\}\right) = 0$ and therefore the constraint in equation ([eq20]) is fulfilled. using this definition, equation ([eq18]) simplifies to a sum over $\ker(c)$. using the results of equations ([eq13]) and ([eq23]), the second-order correlation function in equation ([eq9]) now becomes
\label{eq26}\end{aligned}\ ] ] to calculate a more descriptive representation of the correlation function and to demonstrate , that , equation ( [ eq24 ] ) , ( [ eq25 ] ) and ( [ eq26 ] ) can be further rewritten in terms of real and imaginary parts \nonumber\\ = 1 + & \frac{k^2}{2}\sum_{\substack{\left\{n_j , m_j\right\}\in \ker(c)\\ j=1\ldots n}}\tilde{b}_{\{n_j , m_j\}}\left(\varphi_j\right)\cos\left(ku+\tilde{\varphi}_{\{n_j , m_j\}}\right)\cdot \\ \nonumber & \cdot\biggl[\cos\biggl(\sum_{j=1}^n m_j\omega_j\tau+\phi_{\{n_j , m_j\}}\biggr)+i\cdot\sin\biggl(\sum_{j=1}^n m_j\omega_j\tau+\phi_{\{n_j , m_j\}}\biggr)\biggr ] ~,\end{aligned}\ ] ] with the product of the bessel functions the spatial correlation phase and temporal phase in equation ( [ eq27 ] ) are given by if the constraint in equation ( [ eq20 ] ) is fulfilled for a multiplet , it is also satisfied for . with , and ,it can be shown , that the addend with of the sum in equation ( [ eq27 ] ) is complex conjugated to the addend with .the addend of the zero multiplet is purely real valued .therefore , the imaginary part vanishes after summing up all addends and equation ( [ eq27 ] ) becomes real with the amplitude here , the sum has to be taken over all integer multiplets fulfilling the constraint in equation ( [ eq20 ] ) . in principle, the constraint is satisfied for an infinite number of multiplets each with their own contribution to the correlation function given by however , contributions with large values of are suppressed , because the bessel function ( equation ( [ eq28 ] ) ) strongly decays for .this limits the number of multiplets that have to be taken into account for the correlation analysis .the contribution to of each multiplet addend shows a periodic modulation in the correlation length with the same spatial periodicity as the unperturbed interference pattern , but shifted in -direction by the spatial correlation phase . 
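the set of integer multiplets satisfying the frequency constraint (the kernel described above) can be enumerated directly for small orders. the fragment below is an illustrative helper of ours; the frequency values and the cut-off on |n_j|, |m_j| are arbitrary, and exact rational arithmetic avoids floating-point comparisons:

```python
from itertools import product
from fractions import Fraction

def kernel_multiplets(freqs, max_order=2):
    """Enumerate integer multiplets {(n_j, m_j)} with
    sum_j (n_j + m_j) * freqs[j] == 0, restricted to |n_j|, |m_j| <= max_order.
    Frequencies are converted to exact Fractions."""
    freqs = [Fraction(f) for f in freqs]
    rng = range(-max_order, max_order + 1)
    hits = []
    for nm in product(rng, repeat=2 * len(freqs)):
        n, m = nm[::2], nm[1::2]
        if sum((ni + mi) * f for ni, mi, f in zip(n, m, freqs)) == 0:
            hits.append(tuple(zip(n, m)))
    return hits
```

for a single frequency only the trivial multiplets with n + m = 0 appear; for two frequencies that are multiples of each other (e.g. 50 hz and 100 hz) additional non-trivial multiplets show up, which is exactly the case where the explicit solution is needed.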
the amplitude ( equation ( [ eq31 ] ) ) of each multiplet addend depends on the correlation time and the specific perturbation characteristics .it shows a periodic structure in with the periodicity determined by the frequency component with the coefficients .this periodic structure is shifted in -direction by the temporal phase .the amplitude of the modulation in -direction is given by the peak phase deviations , via the product of the bessel functions in equation ( [ eq28 ] ) .after summing up all addends , the resulting correlation function ( equation ( [ eq30 ] ) ) shows the same spatial periodicity .the overall amplitude is equal to 1 only at certain correlation times given by the involved perturbation frequencies ( see section [ sec2.4 ] ) . at these temporal positions ,the contrast of the correlation function is and therefore directly linked to the contrast of the unperturbed interference pattern . for all other correlation times ,the contrast of the correlation function is .it has to be noted , that the maximum contrast of the correlation function is and therefore a factor of lower than the contrast of the unperturbed interference pattern .the overall amplitude of the resulting correlation function includes the perturbation frequencies , their harmonic frequencies as well as their differences and sums ( intermodulation terms ) .all frequency components are given by the argument of the cosine in equation ( [ eq31 ] ) .approximately , the maximum frequency component per perturbation frequency included in the correlation function is given by , with , as larger frequency components are suppressed due to the strong decay of the bessel function in equation ( [ eq28 ] ) for .therefore , the maximum frequency component of all perturbation frequencies included in the correlation function is approximately given by . 
in the following, an approximate solution for the correlation function is deduced by taking into account only the trivial solutions to the constraint in equation ([eq20]). these are the multiplets with $m_j = -n_j$, which typically give the main contribution to the correlation function. with $m_j = -n_j$, equations ([eq30]) and ([eq31]) simplify, and the approximate second-order correlation function yields equation ([eq34]) with the time-dependent amplitude of equation ([eq35]). similar to the explicit solution of the correlation function in equations ([eq30]) and ([eq31]), the spatial modulation of the approximate solution is given by the spatial periodicity of the unperturbed interference pattern. the approximate solution is independent of the spatial correlation phases and temporal phases (equation ([eq29])). therefore, the addends are not phase shifted with respect to each other in the $u$- and $\tau$-directions. usually, the approximate correlation function in equation ([eq34]) can be used for the description of multifrequency perturbations. however, in the case of a few perturbation frequencies that are multiples of each other, the constraint in equation ([eq20]) is additionally fulfilled for multiplets with $m_j \neq -n_j$, and the explicit solution of the correlation function in equation ([eq30]) has to be applied. in the case of a single perturbation frequency, the constraint of equation ([eq20]) is only satisfied for $m_1 = -n_1$. thus, the explicit and the approximate solution of the correlation function are identical. the determination of the contrast and spatial periodicity from the correlation function shall be discussed now. both can only be obtained correctly at certain correlation times.
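for a single perturbation frequency the tau-dependent envelope can also be written in closed form: the phase difference $\varphi_1\sin(\omega(t+\tau)) - \varphi_1\sin(\omega t) = 2\varphi_1\sin(\omega\tau/2)\cos(\omega(t+\tau/2))$ averages over $t$ to $J_0\!\left(2\varphi_1\sin(\omega\tau/2)\right)$, consistent with the bessel-sum form of the amplitude via the bessel addition theorem. the following sketch (numbers arbitrary) checks this identity numerically:

```python
import numpy as np

def j0(x, n=20000):
    """J0 via its integral representation (numpy only)."""
    theta = (np.arange(n) + 0.5) * np.pi / n
    return np.mean(np.cos(x * np.sin(theta)))

def amplitude_numeric(phi, omega, tau, n=20000):
    """Time-average of cos(phi*sin(w(t+tau)) - phi*sin(w t)): the
    tau-dependent envelope of the single-frequency correlation function."""
    t = np.linspace(0.0, 2 * np.pi / omega, n, endpoint=False)
    return np.mean(np.cos(phi * (np.sin(omega * (t + tau)) - np.sin(omega * t))))

def amplitude_closed(phi, omega, tau):
    """Closed form J0(2*phi*sin(omega*tau/2)) for the same envelope."""
    return j0(2.0 * phi * np.sin(omega * tau / 2.0))
```

the envelope equals 1 at tau = 0 and again after every full perturbation period, which is where the contrast of the unperturbed pattern can be read off.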
as mentioned in sections [sec2.2] and [sec2.3], the explicit and the approximate solution of the correlation function show a periodic modulation in the $u$-direction having the same periodicity as the unperturbed interference pattern. the overall amplitude of this modulation (after summing up all multiplet addends) depends on the correlation time $\tau$, resulting from the specific perturbation spectrum, for both correlation functions (equations ([eq30]) and ([eq34])). its maximum value of 1 is achieved at $\tau = l\,T_s$ with integer $l$, the superperiod $T_s$ being given by the reciprocal value of the greatest common divisor of all perturbation frequencies. for each frequency $\nu_j = \omega_j/(2\pi)$, the product $\nu_j\,T_s$ is then an integer. at the temporal positions $\tau = l\,T_s$, only the addends with $m_j = -n_j$ sum up to a maximum value of 1, because then the addends in equation ([eq30]) are not phase shifted in the $u$- and $\tau$-directions with respect to each other. the sum of the addends with $m_j \neq -n_j$ is equal to zero at these temporal positions. therefore, the exact and the approximate solution are identical at correlation times $\tau = l\,T_s$. using equations ([eq34]) and ([eq35]), the correlation function then becomes equation ([eq36]), which is suitable to obtain the contrast and pattern periodicity of the unperturbed interference pattern. if there is no greatest common divisor, the superperiod is infinite, and the only position where the amplitude has its maximum value of 1 is at $\tau = 0$. therefore, the determination of the contrast and spatial periodicity using equation ([eq36]) can always be applied to the correlation function at the correlation time $\tau = 0$. the properties of the explicit and approximate correlation function, discussed in sections [sec2.2] and [sec2.3], are illustrated below for single- and two-frequency perturbations. the commonalities and differences of both solutions are pointed out.
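the superperiod itself is a one-liner with exact rational arithmetic. the helper below is our own illustration (name and interface are assumptions); it forms the gcd of rational frequencies in hz as gcd(numerators)/lcm(denominators):

```python
from fractions import Fraction
from functools import reduce
from math import gcd

def superperiod(freqs_hz):
    """Superperiod T_s = 1 / gcd(frequencies): the smallest time after which
    all perturbation components are back in phase.  Exact for rational
    frequencies; gcd of rationals = gcd(numerators) / lcm(denominators)."""
    fr = [Fraction(f) for f in freqs_hz]
    num = reduce(gcd, (f.numerator for f in fr))
    den = reduce(lambda a, b: a * b // gcd(a, b), (f.denominator for f in fr))
    return 1 / Fraction(num, den)
```

for frequencies of 50 hz and 125 hz the greatest common divisor is 25 hz, so the amplitude returns to its maximum every 40 ms.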
in the case of one perturbation frequency ($N=1$), the constraint in equation ([eq20]) is only fulfilled for $m_1 = -n_1$. thus, the explicit and the approximate solution are identical. a correlation function of an interference pattern perturbed by a single frequency is shown in the corresponding figure. at correlation times that are integer multiples of the perturbation period, the amplitude becomes 1 and the contrast in the correlation function is directly linked to the contrast of the unperturbed interference pattern. according to equation ([eq7]) and figure [fig1], the integrated interference pattern would be completely "washed out" for these perturbation parameters. using correlation theory, however, the contrast and spatial periodicity can be unveiled as described in section [sec2.4].

figure: second-order correlation function for a single perturbation frequency.

in the case of perturbation frequencies that are multiples of each other, the explicit solution of the correlation function has to be used. this shall be demonstrated for two frequencies where one is an integer multiple of the other. here, the constraint in equation ([eq20]) is satisfied not only for integer multiplets with $m_j = -n_j$, but also for additional multiplets with $m_j \neq -n_j$. these terms lead to additional contributions to the explicit correlation function in equation ([eq30]), causing a spatial and temporal phase shift due to the non-vanishing phases $\tilde{\varphi}_{\{n_j,m_j\}}$ and $\phi_{\{n_j,m_j\}}$ (equation ([eq29])). therefore, the approximate solution of equation ([eq34]) is not suitable, and the explicit solution has to be used.
in figure [fig3](a), a second-order correlation function calculated with the explicit solution (equations ([eq30]) and ([eq31])) is shown for an interference pattern perturbed by two frequencies, one an integer multiple of the other. a superperiod can be identified in both correlation functions, originating from the greatest common divisor of the involved perturbation frequencies.

figure [fig3]: a) explicit solution of the second-order correlation function for an interference pattern perturbed with two frequencies (peak phase deviations $\varphi_1 = \varphi_2 = 0.5\,\pi$, phases $\phi_1 = 0.25\,\pi$ and $\phi_2 = -0.25\,\pi$), calculated according to equations ([eq30]) and ([eq31]). b) approximate solution calculated with equations ([eq34]) and ([eq35]), including only the terms with $m_j = -n_j$. c) difference between explicit and approximate solution, solely given by the terms with $m_j \neq -n_j$.

figure [fig4]: a) explicit solution of the second-order correlation function as calculated according to equation ([eq30]). b) approximate solution of the second-order correlation function calculated with equation ([eq34]). c) difference between explicit and approximate solution, given by the terms with $m_j \neq -n_j$; the structure in c) almost vanishes and finally disappears. at this transition point, the correlation functions in a) and b) are identical and the approximate solution is suitable for the analysis.

as discussed in section [sec2.3], the explicit and approximate solution (equations ([eq30]) and ([eq34])) are identical in the case of one perturbation frequency ($N=1$) and also in the case of numerous frequencies that are not multiples of each other. a common scenario in which both solutions differ from each other is the case when the frequencies are multiples of each other, e.g. for two perturbation frequencies where one is an integer multiple of the other.
Here it shall be shown that the approximate solution may be suitable to describe the correlation function in this case as well. The explicit solution turns into the approximate one if the largest contributing addend in equation ([eq30]) for is small compared to the largest addend with . The amplitudes of all addends are given by the peak phase deviations , via the product of the Bessel functions in equation ([eq28]). The maximum value of the Bessel function is approximately achieved if the order is for and otherwise. Therefore, the amplitude of the largest addend with is given by . Using equation ([eq20]), the constraint for the integer multiplets can be written as . This constraint is generally satisfied for the multiplet with , resulting in the largest contributing integer multiplet . Using this multiplet and for the order of the Bessel function, the amplitude (equation ([eq28])) of the largest addend with contributing to the correlation function is . Here, the variable factor is given by and determines the value of . The ratio of equations ([eq37]) and ([eq40]) is . A closer look at the Bessel function reveals that decays rapidly for , which can be seen from the asymptotic form of the Bessel function for , with denoting the gamma function. Therefore, the explicit solution rapidly approaches the approximate solution once . Figure [fig4] shows an example for a two-frequency perturbation with .
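The rapid decay of the Bessel function once the order exceeds the argument, which the argument above relies on, can be checked numerically. A minimal sketch using the standard integral representation of integer-order Bessel functions, with an illustrative argument x = 2 (the paper's actual peak phase deviations are not reproduced here):

```python
import math

def bessel_j(n, x, steps=2000):
    # integer-order Bessel function via its integral representation:
    # J_n(x) = (1/pi) * integral_0^pi cos(n*tau - x*sin(tau)) d tau,
    # evaluated with a midpoint rule
    h = math.pi / steps
    s = 0.0
    for i in range(steps):
        tau = (i + 0.5) * h
        s += math.cos(n * tau - x * math.sin(tau))
    return s * h / math.pi

x = 2.0   # illustrative argument (peak phase deviation)
vals = [bessel_j(n, x) for n in range(9)]
```

For x = 2 the magnitudes fall from about 0.58 at n = 1 to about 2e-5 at n = 8, so only the few lowest orders contribute appreciably, in line with the discussion above.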
(Figure 6 caption: correlation functions for an interference pattern perturbed with a second frequency (Hz), φ₂ = 0.5π, Φ₂ = −0.25π, for various time separations. a) The structure is "smeared out", because the discretization step size is too large. b) Compared to a), the structure is clearly visible, but the noise increases with smaller and . c) For an even smaller discretization step size, the noise grows further.)

Three correlation functions with different spatial and temporal discretization step sizes are illustrated in figure [fig6]. They are extracted using equation ([eq62]) from a single-particle simulation of an interference pattern (section [sec3.3]) with the given contrast, spatial periodicity (mm) and peak phase deviation (π). The dependence of the extracted contrast on the temporal discretization step size and the peak phase deviation is shown in figure [fig7](b) for .

(Figure 7 caption: a) extracted contrast as a function of . b) Dependence of on the temporal discretization and peak phase deviation for .)

To determine the perturbation frequency and the corresponding peak phase deviation from the correlation function, the analytic amplitude spectrum of equation ([eq47]) is used.
For discrete signals, the amplitude spectrum also depends on and . This dependence is calculated from equations ([eq63]) and ([eq64]) with a temporal Fourier transformation, yielding , with the frequency components . Comparing equation ([eq68]) with the analytic solution in equation ([eq47]) for , it can be seen that the frequency components are not changed, but their amplitudes are modified in equation ([eq68]) due to the spatial and temporal discretization. This dependence is calculated for the amplitude of the fundamental frequency component in equation ([eq68]) with and , resulting in , with the modified amplitude of the fundamental frequency component depending on and . Here, the amplitude of the fundamental frequency component is reduced by a factor of , which needs to be taken into account for the determination of . Therefore, the peak phase deviation extracted from the amplitude spectrum of the correlation function also depends on the discretization step size via the square of the Bessel function. For and , equation ([eq69]) results in , yielding the correct peak phase deviation.

As discussed in section [sec3.1], a real experiment requires the correlation function to be derived from a finite number of detection events. More precisely, it is calculated from the number of correlated particle pairs within a given correlation window and . Due to the statistical nature of the particle detection, is subject to Poissonian noise, which is transferred onto the correlation function and the corresponding amplitude spectrum. In principle, this limits the signal-to-noise ratio and the minimal detectable perturbation amplitude.
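The amplitude reduction caused by finite bin width, discussed above, can be illustrated with a generic sketch. This does not reproduce the paper's exact reduction factor; it demonstrates the closely related standard effect that averaging a signal over time bins of width dt multiplies a spectral component at frequency f by sin(πf·dt)/(πf·dt). All numbers are illustrative:

```python
import math

# Bin-averaging a cosine of frequency f over bins of width dt and then
# projecting onto that frequency recovers an amplitude reduced by the
# factor sin(pi*f*dt)/(pi*f*dt).
f, T, fine = 5.0, 2.0, 4000    # signal frequency, duration, fine sampling

def binned_amplitude(dt):
    nbins = int(round(T / dt))
    per_bin = fine // nbins
    c = s = 0.0
    for b in range(nbins):
        # average the finely sampled signal over one bin
        avg = sum(math.cos(2 * math.pi * f * ((b * per_bin + i + 0.5) * T / fine))
                  for i in range(per_bin)) / per_bin
        tc = (b + 0.5) * dt                      # bin centre
        c += avg * math.cos(2 * math.pi * f * tc)
        s += avg * math.sin(2 * math.pi * f * tc)
    return 2.0 * math.hypot(c, s) / nbins

dts = (0.01, 0.04, 0.08)
measured  = [binned_amplitude(dt) for dt in dts]
predicted = [math.sin(math.pi * f * dt) / (math.pi * f * dt) for dt in dts]
```

The measured amplitudes match the sinc prediction: coarser bins systematically lower the extracted amplitude, which is why the discretization must be corrected for when determining the peak phase deviation.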
In the following section, this effect is estimated and optimal settings for the discretization step size are found. To this end, all fluctuating variables are described by their corresponding mean values and variances, with denoting the standard deviation. For simplicity, the analysis is restricted to correlation times and positions . With the correlation function from equation ([eq62]) being normalized to 1, it can be found that . The expected standard deviation can then be calculated, yielding . With following a Poissonian distribution, variance and mean value are directly linked, yielding . As expected, the variance (noise) of the correlation function depends on the total number of detected particles and the numbers of bins and in the temporal and spatial direction. Before calculating how the noise in the correlation function transfers onto its amplitude spectrum, the correlation function is split into two parts: the first part describing the ideal correlation function, as expected in the limit of infinitely many detection events, and the second part describing the noise only. Obviously, the mean values and standard deviations of these functions are given by . Following this description, the amplitude spectrum of the correlation function reads , with and denoting the discrete Fourier transforms of the time-discrete signals and . Here, denotes the maximum correlation time up to which the correlation function is evaluated. The noise in the spectrum is thus solely contained in . Using Parseval's theorem together with , one finds , and thus a direct link between the noise in the power spectrum and the noise in the correlation function (equation ([eq75])). The signal-to-noise ratio of the amplitude spectrum used for the determination of the peak phase deviation can be calculated with equations ([eq69]), ([eq70]) and ([eq81]), yielding , with . For , the -function approaches 1 and the signal-to-noise ratio only depends on the spatial discretization step
size. The function has a global maximum at the position , with the maximum value . Hence, the optimum spatial discretization step size, leading to a maximum of the signal-to-noise ratio, becomes . The signal-to-noise ratio calculated with equation ([eq82]) is plotted in figure [fig8] for different spatial and temporal discretization step sizes. The optimum spatial discretization step size is clearly visible at . From equations ([eq82]) and ([eq83]), the optimum signal-to-noise ratio is deduced for . A lower limit for the identification of small peak phase deviations can be derived from equation ([eq85]) by setting equal to 1. This sets the threshold at which noise and signal have equal amplitude. With and equation ([eq85]), this yields . For the measurement of small peak phase deviations it is thus favourable to have an interference pattern with large contrast and pattern periodicity. Furthermore, a large number of particles decreases the minimum detectable peak phase deviation.

(Figure 8 caption: signal-to-noise ratio calculated with equation ([eq82]) and normalized to (equation ([eq83])) and . The optimum spatial discretization can be identified at .)

To numerically cross-check the theoretical calculations in section [sec3.2], a set of particles with temporal and spatial coordinates has been simulated. Time and position coordinates are generated according to the corresponding distribution functions using the acceptance-rejection method. For the temporal coordinates, the probability distribution of time differences between successive events is used, which for Poisson statistics is given by . Here, denotes the mean count rate.
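The generation of Poissonian event times from exponentially distributed waiting times, as described above, can be sketched in a few lines. The rate and duration are illustrative values; the resulting counts per time bin should show the Poissonian property used in the noise analysis, namely that the variance equals the mean:

```python
import random

# Event times of a Poisson process: inter-arrival times are drawn from
# the exponential distribution p(dt) = rate * exp(-rate * dt).
random.seed(1)
rate, T_total = 100.0, 200.0     # mean count rate and total duration

times, t = [], 0.0
while True:
    t += random.expovariate(rate)   # exponential waiting time
    if t >= T_total:
        break
    times.append(t)

# counts in 1-second bins should be Poissonian: variance equals the mean
bins = [0] * int(T_total)
for ti in times:
    bins[int(ti)] += 1
mean = sum(bins) / len(bins)
var = sum((n - mean) ** 2 for n in bins) / len(bins)
```

With these settings the mean count per bin is close to the rate, and the variance-to-mean ratio is close to 1, as expected for Poisson statistics.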
Following this distribution function, a set of time differences is generated. Starting with the first event at , the times of successive events are given by . After all time coordinates have been generated, the corresponding spatial coordinates are created according to the probability distribution , with the time-dependent perturbation from equation ([eq2]). This results in a full set of time and position coordinates.

(Figure 9 caption: correlation functions for different spatial discretization step sizes and a fixed temporal discretization step size. They are extracted from a single-particle simulation of an interference pattern with the given contrast, spatial periodicity (mm), peak phase deviation (π) and a single-frequency perturbation (Hz).)

With an acquisition time of (s), particles have been simulated. These parameters have been chosen in accordance with typical experimental parameters. The correlation function was extracted from the simulated data according to equation ([eq62]) for different spatial discretization step sizes () and a temporal discretization step size of (ms). The temporal discretization step size is chosen to ensure that it does not reduce the signal-to-noise ratio (cf. figure [fig8] for ). Three correlation functions for different spatial discretization step sizes are illustrated in figure [fig9]. With increasing from figure [fig9](a) to (c), the structure in the correlation function is "smeared out". On the other hand, the noise decreases for a larger spatial discretization step size, because the mean particle number per bin increases (equation ([eq75])). The trade-off between these two effects leads to an optimum (equation ([eq84])) that provides a maximum of the signal-to-noise ratio (see equation ([eq85])). As with the correlation functions in figure [fig9], many correlation functions with different spatial discretization step sizes ranging from to are extracted from the above simulation according to equation ([eq62]).
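The acceptance-rejection step for the spatial coordinates can be sketched as follows. Positions are drawn from a fringe distribution proportional to 1 + K·cos(k·y + phase); the contrast, period and phase below are illustrative values, not those of the paper's simulation:

```python
import math, random

# Acceptance-rejection sampling from p(y) proportional to
# 1 + K*cos(k*y + phase), using the constant envelope 1 + K.
random.seed(2)
K, lam, phase = 0.6, 2.0, 0.3
k = 2 * math.pi / lam
span = 2 * lam                       # sample over two fringe periods

def draw_position():
    while True:
        y = random.uniform(0.0, span)
        # accept y with probability p(y) / (1 + K)
        if random.uniform(0.0, 1.0 + K) <= 1.0 + K * math.cos(k * y + phase):
            return y

ys = [draw_position() for _ in range(200000)]

# histogram: the fringe modulation survives with roughly the input contrast
nbins = 40
hist = [0] * nbins
for y in ys:
    hist[min(nbins - 1, int(y / span * nbins))] += 1
contrast = (max(hist) - min(hist)) / (max(hist) + min(hist))
```

The histogram of the accepted positions reproduces the fringe pattern with a contrast close to the input value K, up to the small bin-averaging reduction and Poissonian counting noise discussed above.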
For each correlation function, the amplitude spectrum was calculated at the spatial position (mm); the signal height and the standard deviation of the noise were determined in each spectrum. The result for the extracted signal height is plotted in figure [fig10](a) (blue dots). The standard deviation of the noise is shown in figure [fig10](b) and the signal-to-noise ratio in figure [fig10](c). The theoretical curve in figure [fig10](a) (red solid line) was calculated using equation ([eq69]) with the parameters of the simulation. For , the signal height is significantly reduced, because the structure in the correlation function begins to "smear out" (see figure [fig9](b)) until it has totally vanished for (see figure [fig9](c)). The theoretical curve in figure [fig10](b) is evaluated with equation ([eq81]), indicating that the noise is reduced for larger due to the increasing mean particle number in equation ([eq75]). The theoretical signal-to-noise ratio in figure [fig10](c) is calculated according to equations ([eq82]) and ([eq83]), illustrating the predicted optimum at . The corresponding correlation function is shown in figure [fig9](b). The minimum peak phase deviation in equation ([eq86]) can be calculated with the above values and yields (π).

Figure 10 caption: a) signal height at the corresponding position in the amplitude spectrum of the correlation function (blue dots); the solid red line indicates the theoretical signal height calculated with equation ([eq69]). b) The standard deviation of the noise determined from the amplitude spectrum.
The theoretical curve was evaluated using equation ([eq81]). c) The signal-to-noise ratio of the data points in a) and b) shows good agreement with the theory calculated according to equations ([eq82]) and ([eq83]). The optimum spatial discretization can be identified at .

To demonstrate that the correlation analysis can describe not only single perturbation frequencies but also broad-band noise spectra, the following single-particle simulation has been performed. The temporal coordinates of the particles are generated in the same way as described in section [sec3.3]. The perturbation caused by broad-band frequency noise is given by the corresponding amplitude spectrum and phase spectrum . This differs from the former simulation, where the time-dependent perturbation was given by equation ([eq2]). With the amplitude and phase spectrum, the time-dependent perturbation for the temporal coordinate can be calculated using a discrete Fourier transformation , with the number of frequencies in the spectrum. The spatial coordinate is determined as before according to the probability distribution (equation ([eq89])), with the calculated phase shift of equation ([eq90]). For the simulation demonstrated here, a Gaussian distributed noise spectrum with uncorrelated phases is applied. The discrete amplitude spectrum is thus given by , and the phase spectrum is randomly distributed between and . Here, denotes the maximum peak phase deviation, the central frequency and the frequency standard deviation (band width).
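The synthesis of the broad-band perturbation from a Gaussian amplitude spectrum with uniformly random phases can be sketched as below. All parameter values are illustrative; the sketch also shows the superperiod property mentioned later: since every frequency is a multiple of the frequency resolution df, the synthesized phase repeats with period 1/df:

```python
import math, random

# Broad-band perturbation built from a discrete Gaussian amplitude
# spectrum with uniformly random phases, summed as a Fourier series.
random.seed(3)
phi_max, f0, sigma_f = 1.0, 100.0, 10.0   # peak deviation, centre, band width
df, n_freq = 1.0, 200                     # frequency resolution and count

freqs  = [j * df for j in range(1, n_freq + 1)]
amps   = [phi_max * math.exp(-(fj - f0) ** 2 / (2 * sigma_f ** 2)) for fj in freqs]
phases = [random.uniform(0.0, 2 * math.pi) for _ in freqs]

def phi(t):
    # time-dependent phase perturbation from the discrete spectrum
    return sum(a * math.cos(2 * math.pi * fj * t + p)
               for a, fj, p in zip(amps, freqs, phases))

# all frequencies are multiples of df, so phi repeats with superperiod 1/df
superperiod = 1.0 / df
```

The value phi(t) is then used as the time-dependent phase shift when drawing the spatial coordinate of each simulated particle.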
(Figure 11 caption: amplitude spectrum of the applied perturbation. It is calculated with a numerical Fourier transformation from the time-dependent perturbation (equation ([eq90])) used for the simulation in equation ([eq89]). The theoretical amplitude spectrum (red solid line) results from the fit of equation ([eq92]) to the correlation function extracted from the simulated data. The determined characteristics of the Gaussian distributed noise in equation ([eq91]) are (π) and (Hz).)

The simulated interference pattern was perturbed by Gaussian distributed noise according to equation ([eq91]) with parameters (π) and (Hz) that were chosen to give a good signal in the correlation function. In figure [fig12](a), the resulting correlation function is shown. The contrast and spatial periodicity (mm) can be extracted (see section [sec2.4]). The contrast of the correlation function decays on timescales of . In figure [fig12](a), the contrast has almost vanished before the superperiod of (ms) is reached. For larger , more frequency components with random phases contribute to the perturbation, and the resulting time-dependent perturbation becomes more uncorrelated between two time stamps. Therefore, the corresponding particles are also uncorrelated, and the contrast in the correlation function vanishes for shorter correlation times until it is completely lost.

Figure 12 caption: correlation functions for an interference pattern with contrast and spatial periodicity (mm), central frequency (Hz) and a frequency resolution of ; the superperiod is (ms) and the full width at half maximum (FWHM) of the applied noise spectrum is (Hz). b) Theoretical correlation function resulting from the fit with equation ([eq92]).
Here, the argument of the Bessel function is given by the discrete amplitude spectrum in equation ([eq91]).

To determine the characteristics of the Gaussian distributed noise (, and ), equations ([eq34]) and ([eq35]) are used together with the amplitude spectrum of the applied perturbation in equation ([eq91]) as argument of the Bessel function . The approximate correlation function can be used for the theoretical description of broad-band frequency noise as long as the number of involved frequencies is large enough that the constraint in equation ([eq20]) is only fulfilled for . The contrast and spatial periodicity extracted according to section [sec2.4] are fixed parameters for the fit to the correlation function of the simulation in figure [fig12](a). The fit parameters are , and in the discrete amplitude spectrum of the Gaussian distributed noise (equation ([eq91])). The resulting theoretical correlation function is illustrated in figure [fig12](b) and shows good agreement with the correlation function of the simulation. The corresponding amplitude spectrum resulting from the fitted theoretical correlation function is plotted in figure [fig11] (red solid line). It is also in good agreement with the amplitude spectrum of the applied perturbation (blue solid line). The determined characteristics of the Gaussian distributed noise in equation ([eq91]) are (π) and (Hz).

In figure [fig13], the amplitude spectra of the simulated and theoretical correlation functions (blue and red solid lines) are plotted. Both are calculated with a numerical Fourier transformation at (mm) and positive frequencies, using the amplitude spectrum of the perturbation in equation ([eq91]) as argument of the Bessel function . The broad frequency distribution around originates from the fundamental frequencies of the applied perturbation. They are represented in equation ([
eq93]) by the first order of the Bessel function. The distributions around and are generated by the sum and difference frequencies (intermodulation terms) of the perturbation frequencies. Additionally, the distribution around originates from the sum of three frequency components of the applied perturbation. If the properties of the applied perturbation are not known a priori, the amplitude spectrum can be used to infer their shape and frequency characteristics, because these are contained in the spectrum. In figure [fig13], the central frequency can be identified at the position of the maximum of the frequency distribution in the amplitude spectrum and used as a starting value for the theoretical fit function (equation ([eq92])). If the central frequency of the perturbation had been , for example, the distribution around would not be present in the spectrum. The frequency standard deviation of the distribution around is broadened because of additional terms in equation ([eq93]) that do not correspond to the fundamental perturbation frequencies. However, it can be used as a maximum frequency standard deviation for the theoretical fit function. The applied perturbation spectrum can be identified if the frequency distributions contained in the resulting amplitude spectrum are separated. They can overlap if the amplitudes of the perturbation spectrum are large, or for very broad spectra.

The second-order correlation analysis cannot be applied under all conditions. In particular, the possibility to determine the perturbation frequency and amplitude is crucial. Therefore, the limits of applicability of the second-order correlation analysis shall be pointed out. If the time of flight that the particles spend in the area of the perturbation is large compared to the cycle time of the oscillation, it is not possible to resolve the perturbation frequency, because the particles traverse many periods of the oscillation
and therefore the perturbation is averaged out. In this case, the correlation analysis cannot be applied. In general, the second-order correlation theory can be used for periodic oscillations even if the average particle count rate is lower than the perturbation frequency, because of the infinite coherence of such a perturbation. Due to the strong decay of the Bessel function, the highest order per perturbation frequency contributing to the correlation function is . Therefore, the maximum frequency component of all perturbation frequencies included in the correlation function is given by , which arises from the argument of the cosine in equations ([eq31]) and ([eq35]). For slow and random perturbations with a high peak phase deviation, this product sets a lower limit on the average particle count rate needed to achieve good agreement between experiment and theory.

Single-particle interferometry is an outstanding instrument in the field of quantum physics and sensor applications.
Due to their high sensitivity, interferometers are susceptible to dephasing effects originating from electromagnetic oscillations, mechanical vibrations or temperature drifts. In contrast to decoherence, dephasing is a collective shift of the particle wave function, and the contrast is only reduced in the temporally integrated interference pattern. Therefore, dephasing can in principle be reversed. Using second-order correlation theory, the wave properties can be identified and the perturbation characteristics can be determined. This was demonstrated in former publications for electromagnetic perturbations and mechanical vibrations. This paper provides the theoretical foundation for those articles and for future applications in various fields of single-particle interferometry. It gives a detailed description of the analytic solution of the second-order correlation function and its numerical application. We presented the full analytic derivation of our two-dimensional second-order correlation theory for multifrequency perturbations. The difference between the explicit and approximate solutions was discussed and their areas of validity were investigated. The amplitude spectra of both solutions, which are used for the identification of the perturbation characteristics, have been calculated. We provided the numerical solution of the correlation function and investigated the dependence of the extracted contrast and perturbation amplitude on the discretization step size. The influence of noise on the correlation function and the corresponding amplitude spectrum was investigated, and an optimum spatial discretization step size was provided to achieve a maximum signal-to-noise ratio. The validity of our calculations was demonstrated with a single-particle simulation of a perturbed interference pattern, evaluated for different spatial discretization step sizes.
The possibility to analyze broad-band frequency noise was shown using a simulated interference pattern perturbed by Gaussian distributed noise. Our method is a powerful tool for the proof of single-particle interference, even if it has vanished in the spatial signal. Especially for mobile interferometers or experiments in perturbing environments, the requirements for vibrational damping and electromagnetic shielding can be reduced. Furthermore, the method is suitable for analyzing the characteristics of multifrequency perturbations and broad-band noise. It therefore has potential sensor applications, which was demonstrated for mechanical vibrations in an electron interferometer. It can in principle be used in every interferometer that generates a spatial interference pattern on a detector with high spatial and temporal single-particle resolution. This makes our method applicable in a wide range of experiments.

We gratefully acknowledge support from the DFG through the SFB TRR21 and the Emmy Noether program STI 615/1-1. A R acknowledges support from the Evangelisches Studienwerk e.V. Villigst. The authors thank N Kerker and A Pooch for helpful discussions.

Interferometers with single particles are susceptible to dephasing perturbations from the environment, such as electromagnetic oscillations or mechanical vibrations. On the one hand, this limits sensitive quantum phase measurements, as it reduces the interference contrast in the signal. On the other hand, it enables single-particle interferometers to be used as sensitive sensors for electromagnetic and mechanical perturbations. Recently, it was demonstrated experimentally that a second-order correlation analysis of the spatial and temporal detection signal can decrease the electromagnetic shielding and vibrational damping requirements significantly.
Thereby, the relevant matter-wave characteristics and the perturbation parameters could be extracted from the correlation analysis of a spatially "washed-out" interference pattern, and the original undisturbed interferogram could be reconstructed. This method can be applied to all interferometers that produce a spatial fringe pattern on a detector with high spatial and temporal single-particle resolution. In this article, we present and discuss in detail the two-dimensional second-order correlation theory for multifrequency perturbations. The derivations of an explicit and an approximate solution of the correlation function and the corresponding amplitude spectra are provided. It is explained how the numerical correlation function is extracted from the measurement data. Thereby, the influence of the temporal and spatial discretization step size on the extracted parameters, such as contrast and perturbation amplitude, is analyzed. The influence of noise on the correlation function and the corresponding amplitude spectrum is calculated and numerically cross-checked by a comparison of our theory with single-particle simulations of a perturbed interference pattern. Thereby, an optimum spatial discretization step size is determined to achieve a maximum signal-to-noise ratio. Our method can also be applied to the analysis of broad-band frequency noise dephasing the interference pattern. Using Gaussian distributed noise in the simulations, we demonstrate that the relevant matter-wave parameters and the applied perturbation spectrum can be revealed by our correlation analysis.
[s1.3] Motivation
[s1.3.1] Some points of history
[s1.3.2] Some early applications
[s2] Signal processing
[s2.1] Filters in communications engineering
[s2.2] Algorithms for signals and for wavelets
[s2.2.1] Pyramid algorithms
[s2.2.2] Subdivision algorithms
[s2.2.3] Wavelet packet algorithms
[s2.2.4] Lifting algorithms: Sweldens and more
[s2.2bis] Factorization theorems for matrix functions
[snew5aug2.3.4] Matrix completion
[s2.2bis.4] Connections between matrix functions and signal processing
Appendix A: Topics for further research
[s3] Connection between the discrete signals and the wavelets
[s3.1] Wavelet geometry in
[s3.4.1] Cycles
[s3.4.2] The Ruelle-Lawton wavelet transfer operator
[s4] Other topics in wavelets theory
[s4.1] Invariants
[s4.1.1] Invariants for wavelets: global theory
[s4.1.2] Invariants for wavelet filters: local theory
[s4.2] Function classes
[s4.2.1] Function classes for wavelets
[s4.2.2] Function classes for filters
[s4.3] Wavelet sets
[s4.4] Spectral pairs
Appendix B: Duality principles in analysis
Acknowledgements
References

While this series of four lectures will be on the subject of wavelets, the emphasis will be on some interconnections between topics in the mathematics of wavelets and other areas, both within mathematics and outside. Connections to operator theory, to quantum theory, and especially to signal processing will be studied. Concepts such as high-pass and low-pass filters have become synonymous with wavelet tools, but they have also had a significance from the very start of signal processing, for example for early telephone signals over transatlantic cables. This was long before the much more recent advances in wavelets, which started in the mid-1980s (as a resumption, in fact, of ideas going back to Alfred Haar much earlier).
Since the mid-1980s, wavelet mathematics has served to some extent as a clearing house for ideas from diverse areas of mathematics and engineering, as well as from other areas of science, such as quantum theory and optics. This makes interdisciplinary communication difficult, as the lingo differs from field to field, even to the degree that the same term might mean one thing to some wavelet practitioners and something different to others. In recognition of this fact, Chapter 1 in the recent wavelet book samples a little dictionary of relevant terms. Parts of it are reproduced here:

* **Multiresolution:** _Real world:_ a set of band-pass-filtered component images, assembled into a mosaic of resolution bands, each resolution tied to a finer one and a coarser one.
_Mathematics:_ used in wavelet analysis and fractal analysis, multiresolutions are systems of closed subspaces in a Hilbert space, such as $L^2(\mathbb{R})$, with the subspaces nested, each subspace representing a resolution, and the relative complement subspaces representing the detail which is added in getting to the next finer resolution subspace.

* **Matrix function:** a function from the circle, or the one-torus $\mathbb{T}$, taking values in a group of $N$-by-$N$ complex matrices.
* **Wavelet:** a function $\psi$, or a finite system of functions $\{\psi_i\}$, such that for some scale number $N$ and a lattice of translation points on $\mathbb{R}$, say $\mathbb{Z}$, a basis for $L^2(\mathbb{R})$ can be built consisting of the scaled and translated functions $N^{j/2}\psi_i(N^jx-k)$, $j,k\in\mathbb{Z}$.

  Then dulcet music swelled
  Concordant with the life-strings of the soul;
  It throbbed in sweet and languid beatings there,
  Catching new life from transitory death;
  Like the vague sighings of a wind at even
  That wakes the wavelets of the slumbering sea
    Shelley, _Queen Mab_

* **Subband filter:** _Engineering:_ signals are viewed as functions of time and frequency, the frequency function resulting from a transform of the time function; the frequency variable is broken up into bands, and up-sampling and down-sampling are combined with a filtering of the frequencies in making the connection from one band to the next.
_Wavelets:_ scaling is used in passing from one resolution to the next; if a scale $N$ is used from one resolution to the next finer one, then scaling by $1/N$ takes the finer resolution subspace to the coarser one, and there is a set of functions which serve as multipliers when relating one resolution to the next; they are called subband filters.
* **Cascades:** _Real world:_ a system of successive refinements which pass from a scale to a finer one, and so on; used for example in graphics algorithms: starting with control points, a refinement and masking coefficients are used in a cascade algorithm, yielding a cascade of masking points and a cascade approximation to a picture.
_Wavelets:_ in one dimension the scaling is by a number $N$, and the refinements are generated from a fixed simple function, for example a box function.

Consult Chapter 3 of [kai94] for the continuous resolution and Section 2.2 of [brjo02b] for the discrete resolution. If $u$ and $v$ are vectors in a Hilbert space $\mathcal{H}$, then the operator $|u\rangle\langle v|$ is defined by the identity $|u\rangle\langle v|\,w=\langle v,w\rangle\,u$ for all $w\in\mathcal{H}$. The assertions in the first table then amount to a correspondence between the matrix functions $\operatorname{MF}(N)$, i.e., continuous functions from $z\in\mathbb{T}$ into $\mathrm{U}_N(\mathbb{C})$, the wavelet filters $\operatorname{WF}(N)$, i.e., systems $(m_0,\dots,m_{N-1})$ of filter functions $m_i$ on $\mathbb{T}$, and the scaling systems $\operatorname{SF}(N)$, i.e., systems $(\varphi,\psi_1,\dots,\psi_{N-1})$, with $\operatorname{SF}(N)$ carrying the tempered-distribution topology. Then $\operatorname{SF}(N)$ is homeomorphic to a compact algebraic variety. Furthermore, two elements of $\operatorname{SF}(N)$ can be connected to each other by a continuous path in $\operatorname{SF}(N)$ if and only if they can be connected to each other by a continuous path in some larger such variety.

In addition to the general background material in the present section, the reader may find a more detailed treatment of some of the current research trends in wavelet analysis in the following: a book review, a survey, and several research papers.
As a mathematical subject, the theory of wavelets draws on tools from mathematics itself, such as harmonic analysis and numerical analysis. But in addition there are exciting links to areas outside mathematics. The connections to electrical and computer engineering, and to image compression and signal processing in particular, are especially fascinating. These interconnections of research disciplines may be illustrated with the two subjects (1) wavelets and (2) subband filtering [from signal processing]. While they are quite different, have distinct and independent lives, and even have different aims and different histories, they have in recent years found common ground. It is a truly amazing success story. Advances in one area have helped the other: subband filters are absolutely essential in wavelet algorithms, in the numerical recipes used in subdivision schemes, for example, and especially in JPEG 2000, an important and extraordinarily successful image-compression code. JPEG uses nonlinear approximations and harmonic analysis in spaces of signals of bounded variation. Similarly, new wavelet approximation techniques have given rise to the kind of data compression which is now used by the FBI [via a patent held by two mathematicians] in digitizing fingerprints in the U.S. It is the happy marriage of the two disciplines, signal processing and wavelets, that enriches the union of the subjects, and the applications, to an extraordinary degree. While the use of high-pass and low-pass filters has a long history in signal processing, dating back more than fifty years, it is only relatively recently, say since the mid-1980s, that the connections to wavelets have been made. Multiresolutions from optics are the bread and butter of wavelet algorithms, and they in turn thrive on methods from signal processing, in the quadrature mirror filter construction, for example.
The effectiveness of multiresolutions in data compression is related to the fact that multiresolutions are modelled on the familiar positional number system: the digital, or dyadic, representation of numbers. Wavelets are created from scales of closed subspaces of the Hilbert space $L^2(\mathbb{R})$, with a scale of subspaces corresponding to the progression of bits in a number representation. While oversimplified here, this is the key to the use of wavelet algorithms in digital representations of signals and images. The digits in the classical number representation are in fact quite analogous to the frequency subbands that are used _both_ in signal processing and in wavelets. The two functions
$$\varphi(x)=\begin{cases}1 & 0\le x<1\\ 0 & \text{elsewhere}\end{cases}\qquad\text{and}\qquad \psi(x)=\begin{cases}\phantom{-}1 & 0\le x<\frac{1}{2}\\ -1 & \frac{1}{2}\le x<1\\ \phantom{-}0 & \text{elsewhere,}\end{cases} \tag{eq0}$$
the father function (a) and the mother function (b), capture in a glance the refinement identities. The two functions are clearly orthogonal in the inner product of $L^2(\mathbb{R})$, and the two closed subspaces $\mathcal{V}$ and $\mathcal{W}$ generated by the respective integral translates satisfy the invariance conditions (eq2) under the dyadic scaling operator $U$. The factor $2^{-1/2}$ in $(Uf)(x)=2^{-1/2}f(x/2)$ is put in to make $U$ a unitary operator in the Hilbert space $L^2(\mathbb{R})$. This version of Haar's system naturally invites the question of what other pairs of functions $\varphi$ and $\psi$, with corresponding orthogonal subspaces $\mathcal{V}$ and $\mathcal{W}$, there are such that the same invariance conditions (eq2) hold. The invariance conditions hold if there are coefficients $a_k$ and $b_k$ such that the scaling identity
$$\varphi(x)=\sum_k a_k\,\varphi(2x-k) \tag{eq3}$$
is solved by the father function $\varphi$, and the mother function is given by
$$\psi(x)=\sum_k b_k\,\varphi(2x-k). \tag{eq4}$$
A fundamental question is the converse one: give simple conditions on two sequences $(a_k)$ and $(b_k)$ which guarantee the existence of $L^2$-solutions $\varphi$ and $\psi$ which satisfy the orthogonality relations for the translates (eq1). How do we then get an orthogonal basis from this?
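For the Haar pair, the refinement identities and the orthogonality can be checked directly on a grid. A minimal Python sketch (the grid size and tolerances are my arbitrary choices, not from the text) verifies $\varphi(x)=\varphi(2x)+\varphi(2x-1)$, $\psi(x)=\varphi(2x)-\varphi(2x-1)$, and $\int\varphi\psi=0$:

```python
# Haar father (scaling) and mother (wavelet) functions from (eq0).
def phi(x):
    return 1.0 if 0 <= x < 1 else 0.0

def psi(x):
    if 0 <= x < 0.5:
        return 1.0
    if 0.5 <= x < 1:
        return -1.0
    return 0.0

# Refinement identities: phi(x) = phi(2x) + phi(2x-1),
#                        psi(x) = phi(2x) - phi(2x-1).
xs = [k / 1000.0 for k in range(-500, 1500)]
assert all(abs(phi(x) - (phi(2*x) + phi(2*x - 1))) < 1e-12 for x in xs)
assert all(abs(psi(x) - (phi(2*x) - phi(2*x - 1))) < 1e-12 for x in xs)

# Orthogonality: the L^2 inner product of phi and psi vanishes.
# Midpoint rule; both functions are piecewise constant, so the rule is
# exact here (no midpoint hits a jump).
n = 1000
ip = sum(phi((k + 0.5) / n) * psi((k + 0.5) / n) for k in range(n)) / n
assert abs(ip) < 1e-12
```

Since both functions are constant on dyadic intervals, the midpoint sums are exact rather than approximate, which is why a strict tolerance can be used.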
The identities for Haar's functions $\varphi$ and $\psi$ of (eq0)(a) and (eq0)(b) above make it clear that the answer lies in a similar tiling and matching game which is implicit in the more general identities (eq3) and (eq4). Clearly we might ask the same question for other scaling numbers, for example $3$ or $4$ in place of $2$. Actually, a direct analogue of the visual interpretation from (eq0) makes it clear that there are no nonzero locally integrable solutions to the simple variants (eq5) and (eq6) of (eq3). There _are_ nontrivial solutions to (eq5) and (eq6), to be sure, but they are versions of the Cantor devil's-staircase functions, which are prototypes of functions which are not locally integrable.

[Figure [figdaubechies]: the graphs of the Daubechies (a) father function and (b) mother function.]

Since the Haar example is based on the fitting of copies of a fixed box inside an expanded one, it would almost seem unlikely that the system (eq3)-(eq4) admits finite sequences $(a_k)$ and $(b_k)$ such that the corresponding solutions $\varphi$ and $\psi$ are continuous or differentiable functions of compact support. The discovery in the mid-1980s of compactly supported differentiable solutions was paralleled by applications in seismology, acoustics, and optics, and once the solutions were found, other applications followed at a rapid pace: see, for example, the ten books in Benedetto's review. It is the solution $\psi$ in (eq4) that the fuss is about, the mother function; the other one, $\varphi$, the father function, is only there before the birth of the wavelet. The most famous of them are named after Daubechies, and look like the graphs in Figure [figdaubechies].
With the multiresolution idea, we arrive at the closed subspaces noted in (eq1)-(eq2), where $U$ is some scaling operator. There are extremely effective iterative algorithms for solving the scaling identity (eq3): see, for example, Example 2.5.3, pp. 124-125, and Figure [figdaubechies]. A key step in the algorithms involves a clever choice of the kind of resolution pictured in (eq12), but digitally encoded. The orthogonality relations can be encoded in the numbers $a_k$ and $b_k$ of (eq3)-(eq4), and we arrive at the doubly indexed functions
$$\psi_{j,k}(x)=2^{j/2}\,\psi(2^jx-k),\qquad j,k\in\mathbb{Z}. \tag{eq8}$$
It is then not difficult to establish the combined orthogonality relations (eq9), plus the fact that the functions in (eq8) form an orthogonal basis for $L^2(\mathbb{R})$. This provides a painless representation of $L^2$-functions
$$f=\sum_{j,k}c_{j,k}\,\psi_{j,k}, \tag{eq10}$$
where the coefficients are
$$c_{j,k}=\langle\psi_{j,k},f\rangle. \tag{eq11}$$
What is more significant is that the resolution structure of closed subspaces of $L^2(\mathbb{R})$ facilitates powerful algorithms for the representation of the numbers $c_{j,k}$ in (eq11). Amazingly, the two sets of numbers $(a_k)$ and $(b_k)$ which were used in (eq3)-(eq4), and which produced the magic basis (eq8), the wavelets, are the same magic numbers which encode the quadrature mirror filters of the signal processing of communications engineering.
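The algorithmic side of this can be sketched for the Haar pair: the discrete pyramid repeatedly splits a signal into averages and differences using the filter numbers. A minimal Python version (function names and the sample signal are mine, not from the text):

```python
import math

# One pyramid step: split a signal into a coarser half (averages)
# and a detail half (differences), using the orthonormal Haar filter
# numbers h = (1/sqrt 2)(1, 1), g = (1/sqrt 2)(1, -1).
def haar_step(s):
    r2 = math.sqrt(2.0)
    coarse = [(s[2*i] + s[2*i + 1]) / r2 for i in range(len(s) // 2)]
    detail = [(s[2*i] - s[2*i + 1]) / r2 for i in range(len(s) // 2)]
    return coarse, detail

def haar_transform(s):
    # Full pyramid: iterate on the coarse part, collecting details.
    out = []
    while len(s) > 1:
        s, d = haar_step(s)
        out = d + out
    return s + out  # [overall average] + details, coarse-to-fine

signal = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
coeffs = haar_transform(signal)

# The transform is orthogonal, so energy is preserved (Parseval):
assert abs(sum(x*x for x in signal) - sum(c*c for c in coeffs)) < 1e-9
```

The energy check is the discrete counterpart of the orthogonality relations: each pyramid step is an orthogonal change of basis, so iterating steps preserves the $\ell^2$ norm.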
On the face of it, those signals from communication engineering really seem to be quite unrelated to the issues from wavelets: the signals are just sequences, time is discrete, while wavelets concern $L^2(\mathbb{R})$ and problems in mathematical analysis that are highly non-discrete. Dual filters, or more generally, subband filters, were invented in engineering well before the wavelet craze in mathematics of recent decades. These dual filters in engineering have long been used in technology, even more generally than merely for the context of quadrature mirror filters (QMFs), and it turns out that other popular dual wavelet bases for $L^2(\mathbb{R})$ can be constructed from the more general filter systems; but the best of the wavelet bases are the ones that yield the strongest form of orthogonality, which is (eq9), and they are the ones that come from the QMFs. The QMFs in turn are the ones that yield perfect reconstruction of signals that are passed through the filters of the analysis-synthesis algorithms of signal processing. They are also the algorithms whose iteration corresponds to the resolution systems (eq12) from wavelet theory. While Fourier invented his transform for the purpose of solving the heat equation, i.e., the partial differential equation for heat conduction, the wavelet transform (eq10)-(eq11) does not diagonalize the differential operators in the same way. Its effectiveness is more at the level of computation: it turns integral operators into sparse matrices, i.e., matrices which have many zeros in the off-diagonal entry slots. Again, the resolution (eq12) is key to how this matrix encoding is done in practice.

The first wavelet was discovered by Alfred Haar long ago, but its use was limited since it was based on step functions, and the step functions jump from one step to the next. The implementation of Haar's wavelet in the approximation problem for continuous functions was therefore rather bad, and for differentiable functions it is atrocious, and so Haar's method was forgotten for many years. And yet it had in it the one idea which proved so powerful in the recent rebirth (since the 1980s) of wavelet analysis: the idea of a _multiresolution_. You see it in its simplest form by noticing that a box function may be scaled down by a half such that two copies of the smaller box then fit precisely inside; see (eqhaarscaling).

[(eqhaarscaling): the box function, with two half-size copies fitting inside it.]
[(eqhaarwavelet): an up-version and a shifted mirror-image down-version of the half-size box, forming the Haar wavelet.]

This process may be continued if you scale by powers of $2$ in both directions, i.e., by $2^{\pm j}$ for integral $j$. So for every resolution there is a finer one, and if you take an up- and a shifted mirror-image down-version of the dyadic scaling as in (eqhaarwavelet), and allow all linear combinations, you will notice that arbitrary functions on the line, with reasonable integrability properties, admit a representation
$$f=\sum_{j,k}c_{j,k}\,\psi_{j,k}, \tag{eqinttut.1}$$
where the summation is over all pairs of integers $j,k$, with $j$ representing scaling and $k$ translation. The very simple idea of turning this construction into a multiresolution (multi for the variety of scales in (eqinttut.1)) leads not only to an algorithm for the analysis/synthesis problem in (eqinttut.1), but also to a construction of the single functions $\psi$ which solve the problem in (eqinttut.1), and which can be chosen differentiable, and yet with support contained in a fixed finite interval. These two features, the algorithm and the finite support (called _compact_ support), are crucial for computations: computers do algorithms, but they do not do infinite intervals well. Computers do summations and algebra well, but they do not do integrals and differential equations, unless the calculus problems are discretized and turned into algorithms. In the discussion to follow, the multiresolution analysis viewpoint is dominant, which increases the role of algorithms; for example, the so-called pyramid algorithm for analyzing signals, or shapes, using wavelets, is an outgrowth of multiresolutions.
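The perfect-reconstruction property of the QMF analysis-synthesis scheme can be demonstrated in a few lines with the Haar filter pair. A sketch in Python, assuming periodic (circular) filtering, which is a simplification I introduce for brevity:

```python
import math

R2 = math.sqrt(2.0)
h = [1 / R2, 1 / R2]    # low-pass filter
g = [1 / R2, -1 / R2]   # high-pass filter (quadrature mirror of h)

def analyze(x, f):
    # circular filtering with f, then down-sampling by 2
    n, m = len(x), len(f)
    y = [sum(f[k] * x[(i - k) % n] for k in range(m)) for i in range(n)]
    return y[::2]

def synthesize(c, f, n):
    # up-sampling by 2 (insert zeros), then circular filtering with
    # the time-reversed filter (i.e., correlation with f)
    u = [0.0] * n
    u[::2] = c
    return [sum(f[k] * u[(i + k) % n] for k in range(len(f)))
            for i in range(n)]

x = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
low, high = analyze(x, h), analyze(x, g)
rec = [a + b for a, b in zip(synthesize(low, h, len(x)),
                             synthesize(high, g, len(x)))]
assert all(abs(a - b) < 1e-9 for a, b in zip(x, rec))
```

Summing the two synthesized subband signals returns the input exactly: this is the perfect-reconstruction property that singles out the QMF pairs among general dual filter banks.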
Returning to (eqhaarscaling) and (eqhaarwavelet), we see that the scaling function $\varphi$ itself may be expanded in the wavelet basis which is defined from $\psi$, and we arrive at the infinite series
$$\varphi(x)=\sum_{n=1}^{\infty}2^{-n}\,\psi(2^{-n}x), \tag{eqinfiniteseries}$$
which is pointwise convergent for all real $x$. (It is a special case of the expansion (eqinttut.1).) In view of the sketch below, (eqinfiniteseries) gives an alternative meaning to the traditional concept of a _telescoping_ infinite sum. If, for example, $0\le x<1$, then the representation (eqinfiniteseries) yields $\sum_{n\ge1}2^{-n}=1=\varphi(x)$, while for $1\le x<2$ it yields $-\tfrac{1}{2}+\sum_{n\ge2}2^{-n}=0=\varphi(x)$. More generally, for $j\ge0$,
$$\sum_{n=j+1}^{\infty}2^{-n}\,\psi(2^{-n}x)=2^{-j}\,\varphi(2^{-j}x), \tag{eqtail}$$
so the function $\varphi$ is itself in the space which represents the _initial resolution_. The tail terms in (eqinfiniteseries) corresponding to $n>j$ represent the _coarser resolution_. The finite sum $\sum_{n=1}^{j}2^{-n}\psi(2^{-n}x)$ represents the _missing detail_ of $\varphi$ as a ``bump signal''. While the sum on the left-hand side in (eqtail) is _infinite_, i.e., the summation index $n$ is in the range $n>j$, the expression on the right-hand side is merely a coarser scaled version of the original function $\varphi$ from the subspace which specifies the initial resolution. Infinite sums are _analysis problems_, while a scale operation is a single simple _algorithmic step_. And so we have encountered a first (easy) instance of the magic of a resolution algorithm, i.e., an instance of a transcendental step (the analysis problem) which is converted into a programmable operation, here the operation of scaling. (Other more powerful uses of the scaling operation may be found in the recent book by Yves Meyer, especially Ch. 5.)
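The telescoping expansion can be tested numerically. The sketch below uses the Haar pair and the closed form of the tail after $N$ terms, $2^{-N}\varphi(2^{-N}x)$ (restated here from the discussion above, as an assumption of the check):

```python
# Numerical check of the telescoping expansion of the Haar father
# function in the wavelet basis: phi(x) = sum_{n>=1} 2^{-n} psi(2^{-n} x),
# truncated at n = N terms, with tail 2^{-N} phi(2^{-N} x).
def phi(x):
    return 1.0 if 0 <= x < 1 else 0.0

def psi(x):
    return 1.0 if 0 <= x < 0.5 else (-1.0 if 0.5 <= x < 1 else 0.0)

N = 40
for x in [-0.7, 0.1, 0.5, 0.9, 1.5, 3.2, 100.0]:
    partial = sum(2.0**-n * psi(2.0**-n * x) for n in range(1, N + 1))
    tail = 2.0**-N * phi(2.0**-N * x)
    assert abs(phi(x) - (partial + tail)) < 1e-12
```

Note that for $x$ far outside $[0,1)$ the partial sums cancel telescopically to nearly zero, matching $\varphi(x)=0$, while on $[0,1)$ they accumulate to $1$.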
The sketch below allows you to visualize more clearly this resolution-versus-detail concept, which is so central to the wavelet algorithms, also for general wavelets which otherwise may be computationally more difficult than the Haar wavelet.

[Sketch: the wavelet decomposition of Haar's bump function in (eqhaarscaling) and (eqinfiniteseries).]

Using the sketch we see, for example, that a simple step function has the wavelet decomposition into a sum of a _coarser resolution_ and an _intermediate detail_: thus the details are measured as differences. This is a general feature that is valid for other functions and other wavelet resolutions. See, for instance, [exe1sep7-4] below. While the Haar wavelet is built from flat pieces, and the orthogonality properties amount to a visual tiling of the graphs of the two functions $\varphi$ and $\psi$, this is not so for the Daubechies wavelet, nor for the other compactly supported smooth wavelets. By the Balian-Low theorem, a time-frequency wavelet cannot be simultaneously localized in the two dual variables: if $\psi$ is a time-frequency Gabor wavelet, then the two quantities $\int_{\mathbb{R}}|x\,\psi(x)|^{2}\,dx$ and $\int_{\mathbb{R}}|\omega\,\hat{\psi}(\omega)|^{2}\,d\omega$ cannot both be finite. Since localization of $\hat{\psi}$ corresponds to smoothness of $\psi$, this amounts to poor differentiability properties of well-localized Gabor wavelets, i.e., wavelets built using the two operations of translation and frequency modulation over a lattice.
But with the multiresolution viewpoint, we can understand the first of Daubechies's scaling functions as a one-sided differentiable solution to
$$\varphi(x)=a_{0}\varphi(2x)+a_{1}\varphi(2x-1)+a_{2}\varphi(2x-2)+a_{3}\varphi(2x-3), \tag{eq1.3.1}$$
where the four real coefficients satisfy the system
$$\sum_{k}a_{k}=2,\qquad a_{0}a_{2}+a_{1}a_{3}=0,\qquad \sum_{k}(-1)^{k}k\,a_{k}=0,\qquad \sum_{k}a_{k}^{2}=2. \tag{eq1.3.2}$$
The system (eq1.3.2) is easily solved:
$$a_{0}=\frac{1+\sqrt{3}}{4},\quad a_{1}=\frac{3+\sqrt{3}}{4},\quad a_{2}=\frac{3-\sqrt{3}}{4},\quad a_{3}=\frac{1-\sqrt{3}}{4},$$
and Daubechies showed that (eq1.3.1) then has a solution $\varphi$ which is supported in the interval $[0,3]$. The Haar wavelet is supported in $[0,1]$, and if $j\ge0$ and $0\le k<2^{j}$, then the modified function $\psi_{j,k}(x)=2^{j/2}\psi(2^{j}x-k)$ is supported in the smaller interval $[k2^{-j},(k+1)2^{-j}]$.
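The four coefficients can be verified against the defining conditions directly. A short Python check (using the standard D4 values in the normalization $\sum_k a_k = 2$, which is an assumption if a different normalization is intended elsewhere):

```python
import math

# The four Daubechies (D4) masking coefficients, normalized so that
# sum(a) = 2, i.e., phi(x) = sum_k a_k phi(2x - k):
s3 = math.sqrt(3.0)
a = [(1 + s3) / 4, (3 + s3) / 4, (3 - s3) / 4, (1 - s3) / 4]

# The defining conditions of the system (eq1.3.2):
assert abs(sum(a) - 2) < 1e-12                        # low-pass normalization
assert abs(a[0]*a[2] + a[1]*a[3]) < 1e-12             # orthogonality of translates
assert abs(-a[1] + 2*a[2] - 3*a[3]) < 1e-12           # vanishing-moment condition
assert abs(sum(x*x for x in a) - 2) < 1e-12           # unit-norm condition
```

Note that the first, second, and fourth conditions alone would also be satisfied by a zero-padded Haar mask $(1,1,0,0)$; it is the vanishing-moment condition that forces the $\sqrt{3}$ values and the extra smoothness.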
When $j$ is fixed, these intervals are contained in $[0,1]$ for $0\le k<2^{j}$. This is not the case for the other wavelet functions. For one thing, the non-Haar wavelets have support intervals of length more than one, and this forces periodicity considerations. For this reason, Coifman and Wickerhauser invented the concept of wavelet packets. They are built from functions with prescribed smoothness, and yet they have localization properties that rival those of the (discontinuous) Haar wavelet. There are powerful but nontrivial theorems on restriction algorithms for wavelets from $\mathbb{R}$ to $[0,1]$. We refer the reader to the literature for the details of this construction. The underlying idea of Alfred Haar has found a recent renaissance in the work of Wickerhauser on _wavelet packets_. The idea there, which is also motivated by the Walsh function algorithm, is to replace the refinement equation (eqint.1b) by a related recursive system as follows: let $(a_k)$, $(b_k)$ be a given low-pass/high-pass system. Then consider the following _refinement system_ on $\mathbb{R}$:
$$w_{2n}(x)=\sum_{k}a_{k}\,w_{n}(2x-k),\qquad w_{2n+1}(x)=\sum_{k}b_{k}\,w_{n}(2x-k),\qquad n=0,1,2,\dots$$
Clearly the function $w_{0}$ can be identified with the traditional scaling function $\varphi$ of (eqint.1). A theorem of Coifman and Wickerhauser (Theorem 8.1) states that if $\mathbb{N}_{0}$ is partitioned into subsets of the dyadic form $\{2^{j}n,2^{j}n+1,\dots,2^{j}(n+1)-1\}$, then the corresponding function system built from dyadic scalings and integral translates of the $w_{n}$ is an orthonormal basis for $L^{2}(\mathbb{R})$. Although it is not spelled out there, this construction of bases in $L^{2}(\mathbb{R})$ divides itself into two cases: the true orthonormal basis (ONB), and the weaker property of forming a function system which is only a tight frame.
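For the Haar masks $a=(1,1)$ and $b=(1,-1)$, the refinement system above generates the Walsh functions, and the orthonormality can be checked on dyadic grids. A Python sketch (the vector representation on dyadic cells is my implementation choice):

```python
# Wavelet packets from the Haar low-pass/high-pass pair (a = (1,1),
# b = (1,-1)): the refinement system
#   w_{2n}(x)   = w_n(2x) + w_n(2x-1),
#   w_{2n+1}(x) = w_n(2x) - w_n(2x-1),   with w_0 = phi,
# generates the Walsh functions.  Each w_n is represented by its 2^m
# constant values on the dyadic cells of [0,1).
def walsh_system(m):
    size = 2 ** m
    ws = [[1.0] * size]                     # w_0 = the box function
    for n in range(1, size):
        parent = ws[n // 2]
        # w_{n//2}(2x): the parent, compressed by a factor of 2
        comp = [parent[2 * i] for i in range(size // 2)]
        sign = 1.0 if n % 2 == 0 else -1.0  # + for even, - for odd index
        ws.append(comp + [sign * v for v in comp])
    return ws

ws = walsh_system(3)
# The system is orthonormal in L^2[0,1) (inner product = mean of products):
size = len(ws[0])
for p in range(size):
    for q in range(size):
        ip = sum(ws[p][i] * ws[q][i] for i in range(size)) / size
        assert abs(ip - (1.0 if p == q else 0.0)) < 1e-12
```

The compression step is valid because a Walsh function of index below $2^{m-1}$ is constant on pairs of cells at resolution $2^{-m}$, so halving the sample vector loses nothing.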
As in the wavelet case, to get the $w_{n}$-system to really be an ONB for $L^{2}(\mathbb{R})$, we must assume the transfer operator to have _Perron-Frobenius spectrum_ on $\mathbb{T}$. This means that the intersection of the point spectrum with the unit circle is the singleton $\{1\}$, and that the corresponding eigenspace is one-dimensional.

[s2.2.2] Subdivision algorithms

The algorithms for wavelets and wavelet packets involve the pyramid idea as well as subdivision. Each subdivision produces a multiplication of subdivision points. If the scaling is by $N$, then subdivisions multiply the number of subdivision points by $N$. If the scaling is by an integral matrix $A$, then the multiplicative factor in the number of subdivision points is $|\det A|$. For a continuous weight function on $\mathbb{T}$, the _transfer operator_ $R$, or _kneading operator_, with its alias in the Fourier-transformed space, has an adjoint which is the _subdivision operator_ $S$, or chopping operator, on functions on $\mathbb{R}$, with its alias on sequences. We will analyze the duality between $R$ and $S$ and their spectra. Specializing the weight, we note that $R$ is then the transfer operator of orthogonal-type wavelets; in the following, the weight is assumed only to satisfy mild positivity and normalization conditions, and other conditions are discussed in the literature. In the engineering terminology of [exe1sep7-4], the operation (eqk&c.2) is composed of a local filter with the numbers $a_{k}$ as coefficients, followed by the down-sampling, while (eqk&c.4) is composed of up-sampling, followed by an application of a dual filter. In signal processing, down-sampling by $N$ is referred to as ``decimation'' even if $N$ is not $10$. The operator $S$ is called the subdivision operator, or the _woodcutter operator_, because of its use in computer graphics. Iterations of $S$ will generate a shape which (in the case of one real dimension) takes the form of the graph of a function on $\mathbb{R}$. If a sequence $c=(c_{k})$ is given, and if the differences between neighboring terms are small, then we say that $c$ represents _control points_, or a control polygon, and the function generated by iterated subdivision is the limit of the _subdivision scheme_.
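The subdivision operator on sequences can be sketched concretely: each pass doubles the number of points, and iterating from a delta sequence is the cascade algorithm for the scaling function. A minimal Python version (the mask used here is the D4 mask with $\sum_k a_k = 2$, an illustrative choice):

```python
import math

# Subdivision step (Sc)_n = sum_k a_{n-2k} c_k : each pass doubles the
# number of points; iterating from a delta sequence approximates the
# refinable scaling function (the cascade algorithm).
def subdivide(c, a):
    out = [0.0] * (2 * len(c) + len(a))
    for k, ck in enumerate(c):
        for j, aj in enumerate(a):
            out[2 * k + j] += aj * ck
    return out

s3 = math.sqrt(3.0)
a = [(1 + s3) / 4, (3 + s3) / 4, (3 - s3) / 4, (1 - s3) / 4]  # D4 mask

c = [1.0]            # a single control point (delta sequence)
r = 8
for _ in range(r):
    c = subdivide(c, a)

# After r steps, c[n] approximates phi(n / 2^r); the total mass
# sum_n c[n] / 2^r is preserved exactly because sum_k a_k = 2:
assert abs(sum(c) / 2 ** r - 1.0) < 1e-9
```

The mass identity holds at every step by the distributive law, independently of the mask details, which is one reason the normalization $\sum_k a_k = 2$ (equivalently, $m_0(1)=\sqrt{2}$ in filter language) is imposed.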
It follows that the subdivision operator on the sequence spaces, especially on $\ell^{2}$, governs _pointwise approximation_ to refinable limit functions. The dual version of $S$, i.e., the transfer operator $R$, governs the corresponding _mean approximation_ problem, i.e., approximation relative to the $L^{2}$-norm. In Scholium 4.1.2 we consider the eigenvalue problem for $S$ and $R$ in some suitably defined space of sequences. The formula (eqspbmay14.poundbis) for the limit of a given subdivision scheme makes it clear that the case (eqspbmay14.poundpound) must be excluded. For if (eqspbmay14.poundpound) holds for some eigenvalue and some sequence of control points, then there is not a corresponding regular function on $\mathbb{R}$ with its values given on the finer grids $2^{-r}\mathbb{Z}$, $r=1,2,\dots$ We show in Example 4.1.3 that there are no such control points in $\ell^{2}$. Hence the stability of the algorithm! The main difference between the algorithms of wavelets and those of wavelet packets is that for the wavelets the path in the pyramid is to one side only: a given resolution is split into a coarser one and the intermediate detail. The intermediate detail may further be broken down into frequency bands. With the operators acting on $\ell^{2}$, the coarser subspace after $n$ steps is modelled on $S_{0}^{n}\ell^{2}$, and the projection onto this subspace is $S_{0}^{n}S_{0}^{*n}$, where $S_{0}$ is the isometry of $\ell^{2}$ defined by the low-pass filter. But in the construction of the wavelet packets, the subspace resulting from running the algorithm $n$ times is indexed by a word in the low-pass and high-pass letters, and the projection onto this subspace is the corresponding product of the isometries and their adjoints. If $n\ge1$, the wavelet-packet function is computed from the iteration corresponding to the dyadic representation of $n$, whose digits are unique from the Euclidean algorithm. The discussion centers around the matrix functions. [exe6dauswe-11] *The case $N=2$.* Recall that we call a finite sum $\sum_{k}c_{k}z^{k}$ a Fourier polynomial both if the coefficients $c_{k}$ are numbers and if they are matrices. The matrix-valued Fourier polynomials whose values are invertible for all $z\in\mathbb{T}$ form a group under pointwise multiplication.
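A unitary-valued matrix function of the kind discussed below can be assembled from rank-one projections and checked numerically. A Python sketch (the specific projections, the product order, and the normalized Haar-type constant matrix are illustrative assumptions, not the text's canonical choices):

```python
import cmath
import math

# Build a unitary-valued 2x2 matrix function U(z) on the circle from
# one-dimensional projections P_j:
#   U(z) = (I - P_1 + z P_1) ... (I - P_k + z P_k) V,
# with V a constant unitary of Haar type.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def proj(theta, lam):
    # rank-one orthogonal projection parametrized as in (eqgen.15)
    c = math.sqrt(lam * (1 - lam))
    return [[lam, c * cmath.exp(1j * theta)],
            [c * cmath.exp(-1j * theta), 1 - lam]]

def U(z, projs):
    I = [[1, 0], [0, 1]]
    M = I
    for P in projs:
        F = [[I[i][j] - P[i][j] + z * P[i][j] for j in range(2)]
             for i in range(2)]
        M = matmul(M, F)
    V = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
         [1 / math.sqrt(2), -1 / math.sqrt(2)]]
    return matmul(M, V)

projs = [proj(0.3, 0.25), proj(1.1, 0.8)]
for t in [0.0, 0.7, 2.0]:
    z = cmath.exp(1j * t)
    M = U(z, projs)
    # check U(z) U(z)* = I on the circle
    Mstar = [[M[j][i].conjugate() for j in range(2)] for i in range(2)]
    G = matmul(M, Mstar)
    for i in range(2):
        for j in range(2):
            assert abs(G[i][j] - (1 if i == j else 0)) < 1e-9
```

Each factor $I-P+zP$ is unitary for $|z|=1$ because $P$ is an orthogonal projection, so the product with the constant unitary $V$ is again unitary; this is the mechanism behind the FIR factorization.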
For every element of this group there are integers and scalar-valued Fourier polynomials realizing a factorization. This is the first step in the Daubechies-Sweldens lifting algorithm for the discrete wavelet transform. Thus the case $N=2$ gives a constructive lifting algorithm for wavelets, and such an algorithm has not been established in the case $N>2$. The decomposition could also be compared with the one mentioned in connection with the proof above. [exe6dec3-last] Recall the correspondence between matrix functions and wavelet filters: if $A$ is a matrix function, then the corresponding dyadic wavelet filters $(m_{0},m_{1})$ are determined by it. It follows that two matrix functions satisfy a one-sided factorization over the ring of Fourier polynomials if and only if certain of their associated filter components coincide, and similarly for factorizations on the other side. *The conclusion is that the wavelet algorithm for a general wavelet filter corresponding to a matrix function, say $A$, may be broken down into a sequence of zig-zag steps acting alternately on the high-pass and the low-pass signal components.* We mentioned that for matrix functions corresponding to finite-impulse-response (FIR) filters which are unitary, we need only a constant unitary matrix (chosen so as to achieve the high-pass and low-pass conditions) and factors of the form
$$\begin{pmatrix}z & 0\\ 0 & 1\end{pmatrix}$$
in a suitable basis, i.e., $I-P+zP$, where $P$ is a rank-one projection in $\mathbb{C}^{N}$ and $N$ is the scaling number of the subdivision. Unfortunately, no such factorization theorem is available for the non-unitary FIR filters, where the matrix functions take values in the non-singular complex matrices; the Sweldens-Daubechies factorization and the lifting algorithm serve as a substitute. There are still the general non-unimodular FIR matrix functions, where factorizations are so far a bit of a mystery. The matrix functions are called _polyphase matrices_ in the engineering literature. The following summary serves as a classification theorem for the orthogonal wavelets
of compact support: the wavelets correspond to FIR polyphase matrices which are unitary.

1. [genalg(1)] Pick one-dimensional orthogonal projections $P_{1},\dots,P_{k}$ in $\mathbb{C}^{2}$ and define the unitary-valued matrix function on $\mathbb{T}$ as the product of the factors $I-P_{j}+zP_{j}$ and the constant matrix
$$V=\frac{1}{\sqrt{2}}\begin{pmatrix}1 & 1\\ 1 & -1\end{pmatrix}. \tag{eqgen.14}$$
Then each $P_{j}$ has the form
$$P_{j}=\begin{pmatrix}\lambda_{j} & \sqrt{\lambda_{j}(1-\lambda_{j})}\,e^{i\theta_{j}}\\ \sqrt{\lambda_{j}(1-\lambda_{j})}\,e^{-i\theta_{j}} & 1-\lambda_{j}\end{pmatrix}, \tag{eqgen.15}$$
where $\lambda_{j}\in[0,1]$.

[genalg(6)] All other wavelet functions with compact support can be obtained from the ones in the list by integer translation.

[s2.2bis.1] The case of polynomial functions [the polyphase matrix; joint work with Ola Bratteli]

One problem occurring in the biorthogonal context which does not have an analogue in the orthogonal setting stems from the fact that the duality relations do not give any absolute restrictions on the size of the two systems: e.g., a bound on the inner product of two vectors in a Hilbert space does not give a bound on the size of the vectors if they are not equal. This is reflected in the bi-Cuntz relations. Let us now define the operators $S_{i}$ and $\tilde{S}_{i}$ from the two filter systems. Instead of the usual Cuntz relations, the $S_{i}$, $\tilde{S}_{i}$ now satisfy modified relations: if $A$ and $\tilde{A}$ are the matrix-valued functions associated to the two filter systems, we compute that the relevant products are contained in the commutative algebra of multiplication operators defined by $A$ and $\tilde{A}$. Correspondingly, all the operators of this kind are contained in the abelian algebra. We may introduce operators $S$, $\tilde{S}$ by assembling the $S_{i}$ and $\tilde{S}_{i}$, and then $S$ maps into (eqbio.21), etc.
, and the relations ( [ eqbio.17])([eqbio.20 ] ) take the form{ll}s^{\ast}\tilde{s}=1 , & \text{where } 1\text { is the identity in } m_{n}\left ( \mathbb{c}\right ) \otimes c\left ( \mathbb{t}\right ) , \\s\tilde{s}^{\ast}=1 , & \text{where } 1\text { is the identity in } c\left ( \mathbb{t}\right ) , \end{array } \right .\label{eqbio.23}\\ & \left\ { \begin{array } [ c]{l}s^{\ast}s = aa^{\ast},\\ \tilde{s}^{\ast}\tilde{s}=\tilde{a}\tilde{a}^{\ast}. \end{array } \right .\label{eqbio.24}\ ] ] these relations say that all combinations of products of and with and lie in the algebra .but in addition and are matrix - valued functions on , so and hence and all the matrix - valued functions commute .[ schbio.1]given _ any _ bijective operator from into one may define and the bi - cuntz relations are satisfied .if , more specifically , is given by and , then operators exist such that the bi - cuntz relations are satisfied if and only if the operator defined by ( [ eqbio.10 ] ) is invertible , in which case one must use , , and to define . let us now connect the filters to the wavelets .we have already defined the scaling functions , and wavelet functions , , .the expansions for and converge uniformly on compacts , thus and are continuous functions on . to decide that these functions are in one again forms and similarly , and one deduces again from the nonlinear intertwining relation that in the standard case of the good old orthogonal wavelets in of subbands, we will look for functions in such that , if and run independently over all the integers , i.e. , , then the countably infinite system of functions is an _orthonormal basis _ in the hilbert space .the second half of the word orthonormal refers to the restricting requirement that all the functions satisfy or stated more briefly , or yet more briefly , from familiar properties of the lebesgue measure on , it then follows that all the functions satisfy the normalization , i.e. 
, that the functions ( [ eqchexatut.3 ] ) are said to be _ orthogonal _if whenever .we say that the two triple indices are different if or or . if , for example , and , then when the same function is translated by different amounts and , the two resulting functions are required to be orthogonal .it is an elementary geometric fact from the theory of hilbert space that if the functions in ( [ eqchexatut.3 ] ) form an orthonormal basis , then for every function , i.e. , every measurable function on such that we have the identity where the triple summation in ( [ eqchexatut.6 ] ) is over all configurations , .it is convenient to rewrite ( [ eqchexatut.6 ] ) in the following more compact form: surprisingly , it turns out that ( [ eqchexatut.7 ] ) may hold even if the functions of ( [ eqchexatut.3 ] ) do not form an orthonormal basis .it may happen that one of the initial functions , , or satisfies , and yet that ( [ eqchexatut.7 ] ) holds for all . these more general systems are still called wavelets , but since they are special , they are referred to as _ tight frames _ , as opposed to orthonormal bases . in either case , we will talk about a _ wavelet expansion _ of the form it follows that the sum on the right - hand side in ( [ eqchexatut.8 ] ) converges in the norm of for all functions in if ( [ eqchexatut.7 ] ) holds .but there is a yet more general form of wavelets , called _biorthogonal_. the conditions on the functions are then much less restrictive than the orthogonality axioms . 
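The tight-frame phenomenon described above — the Parseval-type identity holding even though the functions are not orthonormal — already occurs in finite dimensions. A minimal sketch in Python, using the standard three-vector "Mercedes-Benz" frame in the plane (an illustrative example, not taken from the text): the three vectors are pairwise non-orthogonal and linearly dependent, yet the energy identity holds exactly.

```python
import math
import random

# Three "Mercedes-Benz" frame vectors in R^2: unit vectors at angles
# 2*pi*k/3, rescaled by sqrt(2/3) so the frame becomes Parseval
# (normalized tight).  They are pairwise non-orthogonal and linearly
# dependent, so they are certainly not an orthonormal basis.
frame = [(math.sqrt(2.0 / 3.0) * math.cos(2 * math.pi * k / 3),
          math.sqrt(2.0 / 3.0) * math.sin(2 * math.pi * k / 3))
         for k in range(3)]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

random.seed(0)
max_err = 0.0
for _ in range(100):
    x = (random.uniform(-1, 1), random.uniform(-1, 1))
    # Parseval identity: sum_k |<x, f_k>|^2 = ||x||^2
    energy = sum(dot(x, f) ** 2 for f in frame)
    max_err = max(max_err, abs(energy - dot(x, x)))

# The frame vectors themselves are not orthogonal:
overlap = abs(dot(frame[0], frame[1]))
print(max_err, overlap)
```

The overlap comes out to 1/3, so no pair of frame vectors is orthogonal, while the expansion identity still holds to machine precision; this is exactly the tight-frame relaxation of the orthonormality axiom.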
hence these wavelets are more flexible and adapt better to a variety of applications , for example , to data compression , or to computer graphics .but the biorthogonality conditions are also a little more technical to state .we say that some given functions , , in are part of a biorthogonal wavelet system if there is a second system of functions , , in , such that every admits a representation and in the standard normalized case where , then you will notice that condition ( [ eqchexatut.7 ] ) turns into for all .the orthogonal wavelets correspond to matrix functions , while the wider class of biorthogonal wavelets corresponds to the much bigger group of matrix functions , via the associated wavelet filters .you may ask , why bother with the more technical - looking biorthogonal systems ?it turns out that they are forced on us by the engineers .they tell us that the real world is not nearly as orthogonal as the mathematicians would like to make it out to be .there is a paucity of symmetric orthogonal wavelets , and symmetry ( `` linear phase '' ) is prized by engineers and workers in image processing , where the more general wavelet families and their duality play a crucial role . now what if we could change the biorthogonal wavelets into the orthogonal ones , and still keep the essential spectral properties intact ?then everyone will be happy .this last chapter shows that it is possible , and even in a fairly algorithmic fashion , one that is amenable to computations .wavelet filters may be understood as matrix functions , i.e. , functions from the one - torus into some group of invertible matrices .
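The biorthogonal representation has a simple finite-dimensional analogue that can be checked numerically: for a non-orthogonal basis given by the columns of an invertible matrix B, the dual basis consists of the rows of B^{-1}, the duality relations are Kronecker deltas, and every vector expands through the dual coefficients. A hedged sketch (the particular 2-by-2 matrix is an arbitrary illustrative choice):

```python
import random

# A deliberately non-orthogonal basis of R^2 and its dual basis.
# For a basis given by the columns of B, the dual basis consists of the
# rows of B^{-1}; then <b_j, btilde_k> = delta_{jk}, and every x has the
# biorthogonal expansion x = sum_k <x, btilde_k> b_k.
b = [(1.0, 0.0), (1.0, 1.0)]          # columns of B = [[1, 1], [0, 1]]

# Invert the 2x2 matrix by hand; rows of B^{-1} give the dual basis.
det = b[0][0] * b[1][1] - b[1][0] * b[0][1]
btilde = [( b[1][1] / det, -b[1][0] / det),
          (-b[0][1] / det,  b[0][0] / det)]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

# Duality relations <b_j, btilde_k> = delta_{jk}:
dual_err = max(abs(dot(b[j], btilde[k]) - (1.0 if j == k else 0.0))
               for j in range(2) for k in range(2))

# Reconstruction x = sum_k <x, btilde_k> b_k for random x:
random.seed(1)
rec_err = 0.0
for _ in range(100):
    x = (random.uniform(-1, 1), random.uniform(-1, 1))
    c = [dot(x, bt) for bt in btilde]
    y = (c[0] * b[0][0] + c[1] * b[1][0], c[0] * b[0][1] + c[1] * b[1][1])
    rec_err = max(rec_err, abs(y[0] - x[0]), abs(y[1] - x[1]))

print(dual_err, rec_err)
```

Note that neither the basis nor its dual is orthonormal, and there is no a priori bound on the size of the dual vectors in terms of the duality relations alone — the finite-dimensional shadow of the point made above about the bi-Cuntz relations.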
if the scale number is , then there are three such matrix groups which are especially relevant for wavelet analysis : {\parbox[t]{0.22\textwidth}{\raggedright : all unitary complex matrices\index{matrix!unitary } } } \subset \framebox[0.24\textwidth]{\parbox{0.22\textwidth}{\raggedright : all invertible complex matrices\index{matrix!invertible } } } \supset \framebox[0.24\textwidth]{\parbox[t]{0.22\textwidth}{\raggedright : all complex matrices with .}}\ ] ] it is possible to reduce some questions in the case to better understood results for ; see chapter 6 of .the case is especially interesting in view of daubechies sweldens lifting for dyadic wavelets ; see [ exe6dauswe-11 ] .* definitions : * a function , or a distribution , satisfying ( [ eqint.1 ] ) is said to be _refinable _ , the equation ( [ eqint.1 ] ) is called the _ refinement equation _ , or also , as noted above , the `` scaling identity '' , and is called the scaling function .the coefficients of ( [ eqint.1 ] ) are called the _ masking coefficients_. we will mainly concentrate on the case when the set is finite .but in general , a function is said to be refinable with scale number if is in the -closed linear span of the translates ; see , e.g. , . since there are refinement operations which are more general than scaling see for example , there are variations of which are correspondingly more general , with regard to both the refinement steps that are used and the dimension of the spaces . the term `` scaling identity '' is usually , but not always , reserved for , while more general refinements lead to `` refinement equations '' .however , often goes under both names .the vector versions of the identities get the prefix `` multi- '' , for example _ multiscaling _ and _ multiwavelet_. 
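To make the refinement equation and the masking coefficients concrete, here is a sketch using the standard Daubechies D4 coefficients (a well-known example; the text above leaves the coefficients and normalization implicit). With scale number 2 and the normalization sum a_k = 2, the coefficients satisfy the quadrature conditions for orthogonal integer translates, and iterating the refinement operator (the cascade algorithm) from the box function approximates the scaling function on a dyadic grid.

```python
import math

# Standard Daubechies D4 masking coefficients, normalized so that
# sum(a) = 2, for the refinement equation phi(x) = sum_k a_k phi(2x - k).
s3 = math.sqrt(3.0)
a = [(1 + s3) / 4, (3 + s3) / 4, (3 - s3) / 4, (1 - s3) / 4]

# Orthogonality conditions on the masking coefficients:
# sum_k a_k = 2 and sum_k a_k a_{k+2m} = 2*delta_{m,0}.
sum_a = sum(a)
quad0 = sum(x * x for x in a)          # m = 0: should equal 2
quad1 = a[0] * a[2] + a[1] * a[3]      # m = 1: should vanish

# Cascade iteration on a dyadic grid over the support [0, 3]:
# phi_{n+1}(x) = sum_k a_k phi_n(2x - k), starting from the box function.
m = 6                                  # grid step h = 2**-m
n = 3 * 2**m
phi = [1.0 if j < 2**m else 0.0 for j in range(n + 1)]   # box on [0, 1)

def val(f, i):
    # evaluate with the convention phi = 0 outside [0, 3]
    return f[i] if 0 <= i < len(f) else 0.0

for _ in range(30):
    phi = [sum(a[k] * val(phi, 2 * j - k * 2**m) for k in range(4))
           for j in range(n + 1)]

phi_at_1 = phi[2**m]   # known exact value: (1 + sqrt(3))/2
# Integer translates of phi form a partition of unity on [0, 1):
pou_err = max(abs(phi[j] + phi[j + 2**m] + phi[j + 2**(m + 1)] - 1.0)
              for j in range(2**m))

print(sum_a, quad0, quad1, phi_at_1, pou_err)
```

With this normalization the even- and odd-indexed coefficients each sum to 1, which is what makes the partition-of-unity check hold exactly along the iteration, while the pointwise values converge to the continuous D4 scaling function.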
if satisfies a condition for obtaining orthogonal wavelets , together with the normalization then ( [ eqint.1 ] ) has a solution in which can be obtained by taking the inverse fourier transform of the product expansion ( here and later we use the convention that if is a function of , then . ) that ( [ eqintnew.6 ] ) gives a solution of ( [ eqint.1 ] ) follows from the relation we mentioned that there is a direct connection between and the scaling function on given in ( [ eqintnew.2 ] ) , ( [ eqint.1 ] ) , and ( [ eqintnew.6 ] ) .there is a similar correspondence between the high - pass filters and the wavelet generators . in the _ biorthogonal _ case, there is a second system and the two systems then form a dual wavelet basis , or dual wavelet frame for in the sense of , chapter 5 .we considered this biorthogonal case in more detail in [ bio ] above .much more detail can be found in chapter 6 of .the idea of constructing maximally smooth wavelets when some side conditions are specified has been central to much of the activity in wavelet analysis and its applications since the mid-1980 s . as a supplement to ,the survey article is enjoyable reading .the paper treats the issue in a more specialized setting and is focussed on the moment method .some of the early applications to data compression and image coding are done very nicely in , , and .an interesting , related but different , algebraic and geometric approach to the problem is offered in .we now turn to an interesting variation of this setup , which includes higher dimensions , i.e. , when the hilbert space is , . staying for the moment with , and fixed, we will take the viewpoint of what is called _ resolutions _ , but here understood in a broad sense of closed subspaces : a closed linear subspace is said to be an -resolution if it is invariant under the unitary operator i.e. 
, if maps into a proper subspace of itself .the subspace is said to be _ translation invariant _ if it is invariant under all integer translations . if there is a function such that is the closed linear span of then clearly is translation invariant . the translation - invariant resolution subspaces are actively studied and reasonably well understood . if is of the form in ( [ eqintoct16.8 ] ) , then we say that it is _ singly generated _ , and that is a scaling function of scale .[ [ s2.2bis.3.2generalized-multiresolutions-joint-work-with-l.baggett-k.merrill-and-j.packer ] ] [ s2.2bis.3.2]generalized multiresolutions [ joint work with l. baggett , k. merrill , and j. packer ] + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the case when the resolution subspace is not singly generated is also interesting , and these resolution subspaces are frequently called _ generalized multiresolution subspaces _ ( gmra ) .there is much current and very active research on them ; see , for example , , , , , , , and .the case when is not singly generated as a resolution subspace of scale , i.e. , when is not of the form ( [ eqintoct16.8 ] ) , occurs in the study of _ wavelet sets_.
a wavelet set in is defined relative to an expansive matrix over .a subset is said to be an -wavelet set if there is a single wavelet function such that .specifically , the condition states that the family is an orthonormal basis for .this can be checked to be equivalent to the combined set of two tiling properties for as a subset of : we define tiling by the requirement that the sets in the family have overlap at most of measure zero relative to lebesgue measure on .similarly , the union is understood to be only up to measure zero .it is easy to see that compactly supported wavelets in are mra wavelets , while most wavelets from wavelet sets are not .these wavelets are typically ( but not always ) frequency localized .the main difference between the gmra ( stands for generalized multiresolution analysis ) wavelets and the more traditional mra ones may be understood in terms of multiplicity . both come from a fixed resolution subspace which is invariant under the translations where hence is a unitary representation of on the hilbert space . as a result of stone s theorem , we find that there are subsets of such that the spectral measure of the ( restricted ) representation has multiplicity on the subset , .it can be checked that the projection - valued spectral measure is absolutely continuous . moreover , there is an intertwining unitary operator such that holds for all and .we may then consider the functions ( ) defined by it was proved by baggett and merrill that generates a normalized tight frame for : specifically , that holds for all .treating as a vector - valued function , denoted simply by , we see that there is a matrix function such that[0pt]{\raisebox{5pt}[0pt][0pt]{\small for transpose}}}}t\right ) = h\left ( e^{it}\right ) \hat{\varphi}\left ( t\right ) , \label{eq2.2bis.3.7bis}\ ] ] where , and .
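The two tiling properties defining a wavelet set can be checked numerically in the simplest one-dimensional case. A sketch using the Shannon wavelet set (a standard example, not written out in the text): W = [-1, -1/2) u [1/2, 1) for dyadic dilations, with the sampling ranges below chosen arbitrarily for illustration.

```python
import random

# Shannon wavelet set for dyadic dilations of the line (a standard
# example): W = [-1, -1/2) u [1/2, 1).
def in_W(x):
    return -1.0 <= x < -0.5 or 0.5 <= x < 1.0

random.seed(2)

# Tiling by dilations: every x != 0 lies in exactly one dilate 2^j W,
# i.e. there is exactly one j with x / 2^j in W.
dil_counts = set()
for _ in range(1000):
    x = random.choice([-1.0, 1.0]) * random.uniform(0.25, 8.0)
    dil_counts.add(sum(1 for j in range(-10, 11) if in_W(x / 2.0**j)))

# Tiling by integer translations: every x in [0, 1) lies in exactly one
# translate W + k.
trans_counts = set()
for _ in range(1000):
    x = random.uniform(0.0, 1.0)
    trans_counts.add(sum(1 for k in range(-4, 5) if in_W(x + k)))

print(dil_counts, trans_counts)
```

Both counters are identically 1 on the sample: each sampled point is covered exactly once by the dyadic dilates and exactly once by the integer translates, which is the combined tiling characterization stated above (here verified only pointwise, not up to measure zero).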
but this method takes the hilbert space as its starting point , and then proceeds to the construction of wavelet filters in the form ( [ eq2.2bis.3.7bis ] ) .our current joint work with baggett , merrill , and packer reverses this .it begins with a matrix function defined on , and then offers subband conditions on the matrix function which allow the construction of a gmra for with generator given by ( [ eq2.2bis.3.7bis ] ) .so the hilbert space shows up only at the end of the construction , in the conclusions of the theorems . in using the polyphase matrices , one may only have the first few rows , and be faced with the problem of completing to get the entire function from a torus into the matrices of the desired size .the case when only the first row is given , say corresponding to a specified low - pass filter , is treated in and , and we refer the reader to the references given there , especially , , , and .the wavelet transfer operator is used in a variety of wavelet applications not covered here , or only touched upon tangentially : stability of refinable functions , regularity , approximation order , unitary matrix extension principles , to mention only a few .the reader is referred to the following papers for more details on these subjects : , , , , , , , , , , , , and . for the sake of illustration ,let us take , and scaling number , i.e. , the case of dyadic framelets .naturally , the notion of tight frame is weaker than that of an orthonormal basis ( onb ) , and it is shown in that when a system of wavelet filters , is given ( must be low - pass ) , then the orthogonality condition on the s always gets us a framelet in , i.e. 
, the functions corresponding to the high - pass filters , , generate a tight frame for , also called a framelet .the correspondence to is called the uep in .the orthogonality condition for , , referred to in the uep is simply this : form an -by- matrix - valued function by using , in the first column , and the translates of the s by a half period , i.e. , , in the second .the condition on this matrix function is that the two columns are orthogonal and have unit norm in for all .note that we still get the unitary matrix functions acting on these systems , in the way we outlined above .but there is redundancy as the unitary matrices are -by- .the reader is referred to for further details .we emphasize that several of these , and other related topics , invite the kind of probabilistic tools that we have stressed here . but a more systematic discussion is outside the scope of this brief set of notes .we only hope to offer a modest introduction to a variety of more specialized topics .[ snew5aug2.3.4(a ) ] the orthogonality condition for , , may be stated in terms of the operators from equation ( [ eq2.9 ] ) , . for each , define an operator on as in ( [ eq2.9 ] ) .then the arguments from section [ s2 ] show that the orthogonality condition for , , i.e. , the uep condition , is equivalent to the operator identity ( [ eq2.8 ] ) where the summation now runs from to .operator systems satisfying ( [ eq2.8 ] ) are called row - isometries .[ snew5aug2.3.4(b ) ] there are two properties of the low - pass filter which we have glossed over .first , must be such that the corresponding scaling function is in . without an added condition on , might only be a distribution .secondly , when the dyadic scaling in is restricted to the resolution subspace , the corresponding unitary part must be zero .these two issues are addressed in , , and .the two groups of matrix functions and , i.e.
, the continuous functions from the torus into the respective groups , enter wavelet analysis via the associated wavelet filters . 1 . [ intmultcorr(1)]matrix functions , , 2 .[ intmultcorr(2)]high- and low - pass wavelet filters , , , and 3 .[ intmultcorr(3)]wavelet generators , , , together with scaling functions , . in particular, { \frac{1}{n}\sum_{w^{n}=z}m_{i}\left ( w\right ) w^{-j } , } & & z\in\mathbb{t},\qquad\label{eqintfeb22.11}\\ \qquad\left ( a^{-1}\right ) _ { i , j } & = \frac{1}{n}\sum_{w^{n}=z}\overline{\tilde{m}_{j}\left ( w\right ) } \,w^{i } , & & z\in\mathbb{t}.\qquad\label{eqintfeb22.12}\ ] ] the dependence of the -functions in ( [ intmultcorr(3 ) ] ) on the group elements from ( [ intmultcorr(1 ) ] ) gives rise to homotopy properties .the standard orthogonal wavelets represent the special case when , or equivalently , , .hence , the matrix functions are unitary in this case .the scaling / wavelet functions with support on a fixed compact interval , say $ ] , , can be parameterized with a finite number of parameters since the unitary - valued function in ( [ eqintfeb22.11 ] ) then is a polynomial in of degree at most .it is well - known folklore from computer - generated pictures that the shape of the scaling / wavelet functions depends continuously on these parameters ; see figures 1.11.7 in and .the scaling function of ( [ eqint.1 ] ) is illustrated there , in the case , and for orthogonal -translates , i.e. , the case ( [ eqintnew.4 ] ) .these pictures illustrate the dependence of on the masking coefficients in the case of : where these formulas arise from an independent pair of rotations by angles and of two `` spin vectors '' , i.e. 
, by taking the matrix function in ( [ eqintfeb22.11 ] ) unitary , , and setting with{cc}1 & 1\\ 1 & -1 \end{array } \right ) , \label{eqa.8bis}\\[3\jot ] q_{\theta}&=\left ( \begin{array } [ c]{cc}\cos^{2}\theta & \cos\theta\sin\theta\\ \cos\theta\sin\theta & \sin^{2}\theta \end{array } \right ) \nonumber \\ & = \frac{1}{2}\left ( \left ( \begin{array } [ c]{cc}1 & 0\\ 0 & 1 \end{array } \right ) + \left ( \begin{array } [ c]{cc}\cos2\theta & \sin2\theta\\ \sin2\theta & -\cos2\theta \end{array }\right ) \right ) , \label{eqa.9}\ ] ] and the orthogonal complement to the one - dimensional projection ,{cc}1 & 0\\ 0 & 1 \end{array } \right ) + \left ( \begin{array } [ c]{cc}\cos2\theta & \sin2\theta\\ \sin2\theta & -\cos2\theta \end{array }\right ) \right ) , } \label{eqa.10}\ ] ] with the coefficients , , , , , given by ( [ eqa.11 ] ) , the algorithmic approach to graphing the solution to the scaling identity ( [ eqint.1 ] ) is as follows ( see , for details ) : the relation ( [ eqint.1 ] ) for is interpreted as giving the values of the left - hand side by an operation performed on those of the on the right , and a binary digit inversion transforms this into the form where is the matrix constructed from the coefficients in ( [ eqint.1 ] ) , and and are the vector functions the signal processing aspect can be understood from the description of subband filters in the analysis and synthesis of time signals , or more general signals for images .
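Since the explicit coefficient formulas of (eqa.11) are not reproduced above, the sketch below uses a standard one-parameter family of length-4 orthogonal masking coefficients (it may differ from (eqa.11) by indexing or normalization conventions); theta = pi/3 recovers the Daubechies D4 filter. It checks that the rotation matrix q_theta of (eqa.9) is a rank-one orthogonal projection and that the orthogonality conditions hold for every angle.

```python
import math
import random

def q_theta(t):
    # The projection of (eqa.9): Q = [[cos^2 t, cos t sin t],
    #                                 [cos t sin t, sin^2 t]].
    c, s = math.cos(t), math.sin(t)
    return [[c * c, c * s], [c * s, s * s]]

def coeffs(t):
    # A standard one-parameter family of length-4 orthogonal masking
    # coefficients (theta = pi/3 gives the Daubechies D4 filter).
    c, s = math.cos(t), math.sin(t)
    return [(1 - c + s) / 2, (1 + c + s) / 2, (1 + c - s) / 2, (1 - c - s) / 2]

random.seed(3)
proj_err = quad_err = 0.0
for _ in range(50):
    t = random.uniform(0, 2 * math.pi)
    q = q_theta(t)
    # Q is a rank-one orthogonal projection: Q^2 = Q = Q^T, trace Q = 1.
    q2 = [[sum(q[i][k] * q[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
    proj_err = max(proj_err,
                   max(abs(q2[i][j] - q[i][j])
                       for i in range(2) for j in range(2)),
                   abs(q[0][0] + q[1][1] - 1.0))
    a = coeffs(t)
    # Orthogonality of integer translates, for every angle theta:
    # sum a_k = 2, sum a_k^2 = 2, sum a_k a_{k+2} = 0.
    quad_err = max(quad_err,
                   abs(sum(a) - 2.0),
                   abs(sum(x * x for x in a) - 2.0),
                   abs(a[0] * a[2] + a[1] * a[3]))

# theta = pi/3 reproduces the first D4 coefficient (1 + sqrt(3))/4:
d4_err = abs(coeffs(math.pi / 3)[0] - (1 + math.sqrt(3)) / 4)
print(proj_err, quad_err, d4_err)
```

This is the computational content of the remark that the shape of the scaling and wavelet functions varies continuously with the rotation angles: the admissibility conditions hold identically in theta, so the whole circle of angles parameterizes valid filters.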
in either case , we have two subband systems and where the functions are the generating functions defined from the filter coefficients and , .originally we had anticipated adding two more chapters to these tutorials , but time and space prevented this .instead we include the table of contents for this additional material .the details for the remaining chapters will be published elsewhere .but as the items in the list of contents suggest , there are still many exciting open problems in the subject that the reader may wish to pursue on his / her own .we feel that the following list of topics offers at least an outline of several directions that the reader could take in his / her own study and research on wavelet - related mathematics .[ [ s3.2intertwining - operators - between - sequence - spaces - l2-and - l2rn ] ] [ s3.2]intertwining operators between sequence spaces and ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [ [ s3.4dependence - of - the - wavelet - functions - on - the - matrix - functions - which - define - the - wavelet - filters ] ] [ s3.4]dependence of the wavelet functions on the matrix functions which define the wavelet filters ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ several versions of spectral duality are presented .on the two sides we present ( 1 ) a basis condition , with the basis functions indexed by a frequency variable , and giving an orthonormal basis ; and ( 2 ) a geometric notion which takes the form of a tiling , or an iterated function system ( ifs ) .our initial motivation derives from the fuglede conjecture , see : for a subset of of finite positive measure , the hilbert space admits an orthonormal basis of complex exponentials , i.e.
, admits a fourier basis with some frequencies from , if and only if tiles ( in the measurable category ) where the tiling uses only a set of vectors in .if some has a fourier basis indexed by a set , we say that is a spectral pair .we recall from that if is an -cube , then the sets in ( 1 ) are precisely the sets in ( 2 ) .this begins with work of jorgensen and steen pedersen where the admissible sets are characterized .later it was shown , and that the identity holds for all .the proofs are based on general fourier duality , but they do not reveal the nature of this common set .a complete list is known only for , , and , see .we then turn to the scaling ifs s built from the -cube with a given expansive integral matrix .each gives rise to a fractal in the small , and a dual discrete iteration in the large . in a different paper , jorgensen and pedersen characterize those ifs fractal limits which admit fourier duality .the surprise is that there is a rich class of fractals that do have fourier duality , but the middle third cantor set does not .we say that an affine ifs , built on affine maps in defined by a given expansive integral matrix and a finite set of translation vectors , admits fourier duality if the set of points , arising from the iteration of the -affine maps in the large , forms an orthonormal fourier basis ( onb ) for the corresponding fractal in the small , i.e. , for the iteration limit built using the inverse contractive maps , i.e. , iterations of the dual affine system on the inverse matrix . by `` fractal in the small '' , we mean the hutchinson measure and its compact support , see .( the best known example of this is the middle - third cantor set , and the measure whose distribution function is the corresponding devil s staircase .
) in other words , the condition is that the complex exponentials indexed by form an onb for .such duality systems are indexed by complex hadamard matrices , see and ; and the duality issue is connected to the spectral theory of an associated ruelle transfer operator , see .these matrices are the same hadamard matrices which index a certain family of quasiperiodic spectral pairs studied in and . they also are used in a recent construction of terence tao of a euclidean spectral pair in for which does not tile with any set of translation vectors in ; see also .we finally report on joint research with dorin dutkay , , , where we show that all the affine ifs s , and more general limit systems from dynamics and probability theory , admit wavelet constructions , i.e. , admit orthonormal bases of wavelet functions in hilbert spaces which are constructed directly from the geometric data .a substantial part of the picture involves the construction of limit sets and limit measures , a part of geometric measure theory .* acknowledgements : * [ ack]we are happy to thank the organizing committee at the national university of singapore for all their dedicated work in planning and organizing a successful conference , of which this tutorial is a part .we especially thank professors wai shing tang , judith packer , zuowei shen , and the head of the department of mathematics of the nus for all their work in making my visit to singapore possible .we thank the institute for mathematical sciences at the nus , and the us national science foundation , for partial financial support in the preparation of these lecture notes .we discussed various parts of the mathematics with our colleagues , professors larry baggett , david larson , ola bratteli , kathy merrill , judy packer , and we thank them for their encouragement and suggestions .the typesetting and graphics were expertly done at the university of iowa by brian treadway .we also thank brian treadway for a number of corrections ,
and for very helpful suggestions , leading to improvements of the presentation .baggett , p.e.t .jorgensen , k.d .merrill , and j.a .packer , _ construction of parseval wavelets from redundant filter systems _ , preprint , 2003 , submitted to j. amer ., arxiv : math.ca/0405301 .baggett , p.e.t .jorgensen , k.d .merrill , and j.a .packer , _ an analogue of bratteli - jorgensen loop group actions for gmra s _ , in wavelets , frames , and operator theory ( college park , md , 2003 ) , ed . c. heil , p.e.t .jorgensen , and d.r .larson , contemp .345 , american mathematical society , providence , 2004 , pp . 1125 .baggett and d.r .larson ( eds . ) , _ the functional and harmonic analysis of wavelets and frames : proceedings of the ams special session on the functional and harmonic analysis of wavelets held in san antonio , tx , january 1314 , 1999 _ , contemp .247 , american mathematical society , providence , 1999 .baggett and k.d .merrill , _ abstract harmonic analysis and wavelets in _ , in the functional and harmonic analysis of wavelets and frames ( san antonio , 1999 ) , ed .baggett and d.r .larson , contemp .247 , american mathematical society , providence , 1999 , pp . 1727 .o. bratteli and p.e.t .jorgensen , _ convergence of the cascade algorithm at irregular scaling functions _, in the functional and harmonic analysis of wavelets and frames ( san antonio , 1999 ) , ed .baggett and d.r .larson , contemp .247 , american mathematical society , providence , 1999 , pp . 93130 .to3em , _ wavelet filters and infinite - dimensional unitary groups _ , in wavelet analysis and applications ( guangzhou , china , 1999 ) , ed .d. deng , d. huang , r .- q .jia , w. lin , and j. wang , ams / ip studies in advanced mathematics , vol .25 , american mathematical society , providence , international press , boston , 2002 , pp . 
3565 .coifman and m.v .wickerhauser , _ wavelets and adapted waveform analysis : a toolkit for signal processing and numerical analysis _ , in different perspectives on wavelets ( san antonio , tx , 1993 ) ,i. daubechies , proc .47 , american mathematical society , providence , 1993 , pp . 119153 .d. esteban and c. galand , _ application of quadrature mirror filters to split band voice coding systems _ , in ieee international conference on acoustics , speech , and signal processing ( washington , dc , may 1977 ) , institute of electrical and electronics engineers , piscataway , nj , 1977 , pp .191195 .a. fijany and c.p .williams , _ quantum wavelet transforms : fast algorithms and complete circuits _ , in quantum computing and quantum communications ( palm springs , ca , 1998 ) , ed .williams , lecture notes in computer science , vol .1509 , springer , berlin , 1999 , pp .freedman , _ poly - locality in quantum computing _ , found .comput . math .* 2 * ( 2002 ) , 145154 .b. fuglede , _ commuting self - adjoint partial differential operators and a group - theoretic problem _ , j. funct .16 * ( 1974 ) , 101121 .d. han , d. r. larson , m. papadakis , and th .stavropoulos , _ multiresolution analyses of abstract hilbert spaces and wandering subspaces _ , in the functional and harmonic analysis of wavelets and frames ( san antonio , 1999 ) , ed . l.w .baggett and d.r .larson , contemp .247 , american mathematical society , providence , 1999 , pp . 259284 .heller , j.m .shapiro , and r.o .wells , _ optimally smooth symmetric quadrature mirror filters for image coding _ , in wavelet applications ii ( orlando , 1995 ) , ed .szu , proceedings of spie , vol .2491 , society of photo - optical instrumentation engineers , bellingham , wa , 1995 , pp .119130 .heller , v. strela , g. strang , p. topiwala , c. 
heil , and l.s .hills , _ multiwavelet filter banks for data compression _ , in ieee international symposium on circuits and systems , 1995 ( iscas 95 ) , institute of electrical and electronics engineers , new york , 1995 , vol . 3 , pp .17961799 .to3em , invited featured book review of _ an introduction to wavelet analysis _ by david f. walnut , applied and numerical harmonic analysis , birkhuser , 2002 , bull .( n.s . ) * 40 * ( 2003 ) , 421427 .a. klappenecker , _ wavelets and wavelet packets on quantum computers _ , in wavelet applications in signal and image processing vii ( denver , 1999 ) , ed .unser , a. aldroubi , and a.f .laine , proceedings of spie , vol .3813 , society of photo - optical instrumentation engineers , bellingham , wa , 1999 , pp .703713 .m. lang and p.n .heller , _ the design of maximally smooth wavelets _ , in ieee international conference on acoustics , speech , and signal processing , 1996 ( icassp-96 ) , institute of electrical and electronics engineers , new york , 1996 , vol . 3 , pp .14631466 .to3em , _ wavelets and functions with bounded variation from image processing to pure mathematics _ , atti accad .lincei cl .fis . mat .lincei ( 9 ) mat .* 11 * ( 2000 ) , 77105 , special issue : _ mathematics towards the third millennium _( papers from the international conference held in rome , may 2729 , 1999 ) .v. perrier and m.v .wickerhauser , _ multiplication of short wavelet series using connection coefficients _ , in advances in wavelets ( hong kong , 1997 ) , ed .lau , springer - verlag , singapore , 1999 , pp .77101 .riemenschneider and z. shen , _ box splines , cardinal series , and wavelets _ , in approximation theory and functional analysis ( college station , texas , 1990 ) , ed .chui , academic press , boston , 1991 , pp .133149 .g. strang , v. 
strela , and d .- x .zhou , _ compactly supported refinable functions with infinite masks _ , in the functional and harmonic analysis of wavelets and frames ( san antonio , 1999 ) , ed .baggett and d.r .larson , contemp .247 , american mathematical society , providence , 1999 , pp . 285296 .wickerhauser , _ best - adapted wavelet packet bases _ , in different perspectives on wavelets ( san antonio , tx , 1993 ) , ed .i. daubechies , proc .47 , american mathematical society , providence , 1993 , pp . 155171 | [ abs]abstract : some connections between operator theory and wavelet analysis : since the mid eighties , it has become clear that key tools in wavelet analysis rely crucially on operator theory . while isolated variations of wavelets , and wavelet constructions had previously been known , since haar in 1910 , it was the advent of multiresolutions , and sub - band filtering techniques which provided the tools for our ability to now easily create efficient algorithms , ready for a rich variety of applications to practical tasks . part of the underpinning for this development in wavelet analysis is operator theory . this will be presented in the lectures , and we will also point to a number of developments in operator theory which in turn derive from wavelet problems , but which are of independent interest in mathematics . some of the material will build on chapters in a new wavelet book , co - authored by the speaker and ola bratteli , see http://www.math.uiowa.edu/jorgen/0.1em . [ subsubsection ] [ subsubsection ] [ subsubsection ] [ subsubsection ] [ subsubsection ] [ subsubsection ] [ subsubsection ] l |
the system is composed of a set of elements interconnected forming a scale - free undirected random network .every node can present two states , active and inactive , and the dynamics that governs their state is similar to the standard sis model .nodes flip from one state to the other following a poisson process with probability densities for activation and for deactivation . in particular, a node flips from the active to the inactive state at a constant rate , while its activation rate is proportional to the number of active nodes to which it is connected , . in the latter sum , if node is active at time and otherwise ; similarly , if and only if nodes and are connected .we now show that is , after the system has reached the steady - state , approximately time - independent and proportional to the degree of the node ; we will refer to it as . to derive its expression, we first apply a coarse - graining approximation by assuming that all nodes with the same degree are equivalent and statistically independent . under this approximation ,the state of the system is characterized by the fraction of nodes in the active state in every degree class , where stands for the number of nodes with degree in the network .since we are assuming all nodes with the same degree to be statistically independent , yields the probability that a randomly chosen node with degree is active .hence , the probability that a randomly chosen link emerging from a node with degree reaches an active node can be computed as , i.e. , the probability that the link reaches a node with degree , , and that such node is active , , summed for all . in the case of uncorrelated random networks , , so the probability for a link to connect to an active node becomes degree independent , .moreover , in the steady - state it takes an almost constant value ( see fig . 
[fig : wprevalence ] ) , which allows us to compute the average number of active neighbors of a node with degree as . hence , in addition to the sis - like dynamics , we let nodes produce while they are in the active state with a probability density , which need not be exponential . it is therefore important to note that , if gives the probability for the time interval between two consecutive production events to fall into ( for a node that is active throughout the whole process ) , the probability density for the interval between the activation time and the first production event is , in general , given by where is the average production interval and is the survival probability distribution ( the probability that the interval is greater than ) . we will use the same notation for the activation and deactivation probabilities as well : will refer to the survival probability of . to justify eq . , let us consider a sequence of events distributed with probability density ( so the whole process takes a time ) and calculate , i.e. , the probability that an activation event falls at a distance between and from the next production event . assuming that the activation event is uniformly distributed along the interval , the probability for it to fall in any interval of length is simply . in our case , , where is the number of intervals greater than , since there is an interval of length placed at distance from the next production event in every production interval greater than . in the latter section we have reduced the study of the system to that of a single node that activates with rate given by eq .
, deactivates with rate and produces in the active state with probability densities and . we now compute the probability that two consecutive production events take place in the interval , taking into account the fact that the state of the node can flip any number of times between the two events ( as long as it is in the active state during both productions ) . it is easy to see that if it does not flip to the inactive state during the process , is given by ( the probability that it does not deactivate in a time less than and that it produces in the given interval ) . similarly , we can compute the probability that it deactivates only once between both productions ; however , in this case we have to integrate over all possible deactivation and activation times ( let us call them and , respectively ) , yielding . following the same procedure , we can write the total probability density as the series we do not need to solve the series , since we are only interested in the first two moments of the distribution to obtain . to this end , we make use of the laplace transform formalism , which is particularly convenient for two reasons . on the one hand , the -th moment of a distribution can be obtained from the -th derivative of its transform , . on the other hand , the transform of a convolution is . since any term in eq . is a chain of convolutions , it is not hard to write down , where in the last step we have made use of the fact that . we can write the latter result in terms of only by plugging the expressions for and into it , and the final expression for reads \[ \hat{\phi}(s) = \frac{\cdots \left[ 1 - \frac{1 - \hat{\psi}(s + \lambda_{i})}{\langle t_{p} \rangle \left( s + \lambda_{i} \right)} \right]^{2}}{\frac{s + \lambda_{eff}(k)}{\lambda_{eff}(k)} - \frac{\lambda_{i}}{s + \lambda_{i}} \left[ 1 - \frac{1 - \hat{\psi}(s + \lambda_{i})}{\langle t_{p} \rangle \left( s + \lambda_{i} \right)} \right]} . \] up to this point , we have solved the general problem for an unspecified production probability density .
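the moment - from - transform identity invoked here , \langle t^n \rangle = (-1)^n \, d^n \hat{\phi} / ds^n |_{s=0} , is easy to sanity - check numerically . the sketch below ( python ) does this for an exponential density , which is only an illustrative stand - in with an arbitrarily chosen rate , using central finite differences at s = 0 :

```python
def phi_hat(s, lam=2.0):
    """Laplace transform of an exponential density psi(t) = lam * exp(-lam * t)."""
    return lam / (s + lam)

# central finite differences at s = 0
h = 1e-4
d1 = (phi_hat(h) - phi_hat(-h)) / (2 * h)
d2 = (phi_hat(h) - 2 * phi_hat(0.0) + phi_hat(-h)) / h ** 2

mean = -d1      # <t>   = -phi_hat'(0)  -> 1 / lam = 0.5
second = d2     # <t^2> =  phi_hat''(0) -> 2 / lam^2 = 0.5
```

for rate 2 this recovers \langle t \rangle = 0.5 and \langle t^2 \rangle = 0.5 , hence variance 0.25 and a coefficient of variation of 1 , as it should be for an exponential density .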
in this paper we have considered two different cases , which will be studied separately in two subsections . in the first stage of this work , we have considered a weibull probability density function with scale parameter and shape parameter , whose intrinsic burstiness coefficient is . plugging its laplace transform into eq . yields a rather cumbersome expression , \[ \hat{\phi}(s) = \frac{\left[ \cdots \right]^{2}}{4 s \left( s + \lambda_{i} \right) \left( s + \lambda_{i} + \lambda_{eff}(k) \right) + 4 \lambda_{i} \lambda_{eff}(k) \beta - 2 \lambda_{i} \lambda_{eff}(k) \beta \, e^{\frac{\beta}{2 \left( s + \lambda_{i} \right)}} \sqrt{\frac{2 \pi \beta}{s + \lambda_{i}}} \, \text{erfc} \left( \sqrt{\frac{\beta}{2 \left( s + \lambda_{i} \right)}} \right)} . \] we can now obtain the first and second moments of by differentiating the latter equation , and therefore write the expressions for the mean , the standard deviation and in terms of the normalized parameters and : hence , we can repeat the same procedure as in the latter subsection for an exponential production probability density , whose intrinsic burstiness coefficient is . its laplace transform is given by so eq . becomes evaluating its first and second derivatives at and using normalized parameters and gives and . let us focus on the case of a single node that produces with an arbitrary and consider several limiting cases . in order to define a timescale , we set throughout this section without loss of generality ( so the ratios and to which we will refer become simply and ) . in the limit of , the fraction of production intervals interrupted by a deactivation goes to zero , so the resulting probability density function . this result can also be derived by taking the limit of eq . as , since . notice , however , that this limit is slightly different from the limit with fixed ( which gives ) ; the reason is that , in the present case , when taking the limit for a fixed , the ratio obviously remains constant , i.e.
the interrupted intervals become less frequent but not longer than uninterrupted ones . on the other hand , taking the limit for a fixed gives .therefore , even though interrupted intervals become less frequent , they become much longer than uninterrupted intervals as well , hence yielding a bursty behavior .that is why , when working with the normalized parameters and , must be increased too in order to recover the original burstiness coefficient ( as shown in fig . [fig : theo_reduction1 ] ) . for intermediate values of the deactivation rate , the fraction of production events interrupted by a deactivation may be significant ; there is , on average , one production event per unit time during the active state , while there are interruptions per unit time .therefore , approximately one out of every production intervals is interrupted .in addition , the average time the node remains in the inactive state is , so if , production events buffer during the active state according to and are followed by a long period of inactivity .hence , the resulting distribution becomes burstier than the original . in the opposite case ( ) ,the production process becomes poisson regardless of ; since , the probability that the node produces during an active period ( which we will denote by ) is very small , and we can regard the process as a succession of active and inactive states of average length , each of which with an independent production probability . 
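the geometric - thinning argument sketched above can be illustrated with a short monte carlo ( python ) : if each active - inactive cycle carries only a small probability p of containing a production event , the inter - production time is a geometric sum of cycle lengths , which for exponential cycles is again exponential . the rate and p below are illustrative choices :

```python
import random
import statistics

random.seed(7)

p = 0.05           # probability of producing during a given cycle (p << 1)
cycle_rate = 1.0   # exponential cycle-length rate

def interproduction_time():
    """Accumulate cycle lengths until one cycle finally contains a production."""
    t = 0.0
    while True:
        t += random.expovariate(cycle_rate)
        if random.random() < p:
            return t

samples = [interproduction_time() for _ in range(20000)]
mean = statistics.mean(samples)          # theory: 1 / (p * cycle_rate) = 20
cv = statistics.stdev(samples) / mean    # ~1, i.e. effectively Poisson
```

a geometric number of i.i.d. exponentials is itself exponentially distributed , which is why the coefficient of variation lands at 1 regardless of how small p is .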
under this approximation , we can easily compute the probability that two consecutive production events take place in a time interval greater than as the probability for the node not to produce in any of intervals , . furthermore , can be estimated as well for any . its exact expression is given by since we are considering the limit of , the probability that a production interval is greater than is approximately one , so in the range of small values of in which the exponential term in the integrand is not close to zero . hence , we can approximate eq . as we therefore see that the production process becomes exponentially distributed with mean , which is in accordance with eqs . ( [ eq : mu_reg],[eq : mu_ind ] ) . this result can also be obtained by taking the limit of as , assuming and . from all the above , we can conclude that the network dynamics can be used to control the production of nodes between three regimes of special interest : the endogenous production behavior , a bursty process and a poisson process ( see table [ tab : behaviours ] ) . furthermore , these results are completely general and independent of the functional form of ( as long as eq . holds ) . [ tab : behaviours ] different behaviors exhibited by a node following the dynamics presented in the paper . the iai dynamics is simulated using the continuous - time gillespie algorithm . in addition , at every deactivation event the deactivated node's production is updated by generating random intervals with distributions for the first one and for the rest of them . the initial condition for all simulations is ( all nodes active ) , but we only measure nodes for simulation times , since the system reaches a stationary state before that value ( see fig . [ fig : wprevalence ] ) . we stop simulations when all nodes have produced at least times , so enough statistics are guaranteed . in figs .
2 and 3 in the paper ,the simulation parameters have been chosen in such a way that a wide heterogeneity of behaviours is exhibited ; in figs .2a and 3b , every simulation corresponds to a different value of between and , so takes values approximately between and .since the degree of nodes is between and , is approximately in the intervals for and for . from fig .1 it can be seen that for , lies about the middle of the interval in which can be varied ; we have set .a similar criterion has been applied when setting .likewise , in figs .2b and 3c is fixed , so and hence .given that and are varied over 4 orders of magnitude , a wide range of values is obtained .in this section we present some results not included in the paper . in fig .[ fig : wprevalence ] we show the quantity derived in section [ sec : dynamics ] .as expected , this quantity fluctuates around a constant value that can be easily measured from simulations . to see how the network topology affects the induction of burstiness , we have measured for different networks . in fig .[ fig : heterogeneity ] we show the results for four sf networks with , and , but different exponents . in fig . [fig : assortativity ] , the results correspond to four sf random graphs with the same degree distribution but different assortativities .our results show a weak dependence on degree heterogeneity ; indeed , the higher the heterogeneity , the faster the decay of with , although the effect is not really significant .similarly , degree correlations slightly affect the burstiness of high degree nodes , since it is greater for disassortative networks . yet , assortativity does not play an important role at burstiness induction either . 
for sf graphs with different assortativity . top - left : . top - right : . bottom - left : . bottom - right : . since the burstiness coefficient alone does not provide any details about the distribution other than the ratio between and , we have measured the cumulative inter - event time distribution function for the burstiest node in the network , as shown in fig . [ fig : burstiest ] ( left ) . we have also measured the burstiness of the same distribution by computing the normalized conditional average interval as a function of ( see fig . [ fig : burstiest ] , right ) . for a poisson process , the conditional average interval is independent of and equal to , while for a regular - like distribution , this should be a decreasing function ( the more time elapsed since the last event , the less time is expected until the next one ) .
in our case , we observe exactly the opposite situation , which evidences the counter - intuitive behaviour of bursty distributions ; the more time elapsed since the last event , the more time expected until the next one . we thus see that the network clearly induces burstiness on the activity of nodes . left : cumulative inter - event time distribution function of the node with highest in the network ( for ) . right : normalized conditional average interval as a function of . we see that it clearly increases above the constant value of 1 expected for a non - bursty poisson process . | we prove that complex networks of interactions have the capacity to regulate and buffer unpredictable fluctuations in production events . we show that non - bursty network - driven activation dynamics can effectively regulate the level of burstiness in the production of nodes , which can be enhanced or reduced . burstiness can be induced even when the endogenous inter - event time distribution of nodes ' production is non - bursty . we found that hubs tend to be less controllable than low degree nodes , which are more susceptible to the networked regulatory effects . our results have important implications for the analysis and engineering of bursty activity in a range of systems , from telecommunication networks to transcription and translation of genes into proteins in cells . while human perception tends to appreciate regularity in the occurrence of events , evidence reveals that in many systems events cluster in bursts , _ i.e. _ , accumulations of a large number of rapidly occurring events during short time intervals separated by silent periods .
bursts have been experimentally and empirically observed in profusion and many models have been proposed to explain and generate them . for example , traces of human activity are to a great extent imprinted with burstiness . in particular , bursts and heavy - tails pervade the diversity of human communication channels , from written text , to the frequency of letter and e - mail correspondence , and cell phone calls . in a different context , key processes in single - cell biology , such as transcription and translation , are particularly prone to progress in bursts . for instance , mrna synthesis proceeds in short but intense outbreaks beginning when coding genes transition from an inactive to an active state . these bursts have been suggested to be able to affect the expression of essential genes and even the switching and rhythms of cellular states . this may raise questions about how cells are able to maintain function in the face of such unpredictable fluctuations explained by different single gene burst generation mechanisms , with the hypothesis that those bursts could be buffered by local or bulk degradation mechanisms . it is interesting to explore whether large - scale system mechanisms , like the interconnection of genes in regulatory networks of interactions , have the capacity to produce similar buffering effects . in this work , we show indeed that endogenous burstiness in the production of nodes can be both enhanced or reduced by stochastic network - driven non - bursty interactions , which activate / inactivate nodes dynamically . we observe a strong anti - correlation of these regulation phenomena with node degree , such that the burstiness of lower degree nodes is always more susceptible to being raised by dynamic network influence , while hubs tend to experience a decreased burstiness in their endogenous production , which helps to maintain a stable bulk level . 
conversely , the production sequence of individual nodes of all degrees can present bursts even when their endogenous production pattern follows a non - bursty inter - event time probability distribution , and we observe again a strong anti - correlation of the induced burstiness with node degree . note that here we are reversing the topical question about the macroscopic collective effects of burstiness on dynamical processes running on complex networks . the bursty activity of nodes , both in models of opinion formation and epidemic spreading , has been observed to induce a slowing down of the progression of the dynamical process . here , we focus instead on the regulatory effects of networked dynamical interactions on the endogenous bursty activity of individual nodes . we consider nodes interconnected forming a complex network that can present two states , active and inactive . during the active state , each node produces events according to a certain endogenous inter - event time probability distribution function . the activation dynamics of nodes is governed by a propagating process in the network . we chose an inactive - active - inactive ( iai ) dynamical process akin to the standard susceptible - infected - susceptible epidemic spreading model , so that active nodes inactivate at rate and inactive ones become active upon contact with active ones at rate per active contact . in all cases , these processes are assumed to follow poisson statistics . an endemic state of active nodes can be sustained whenever the ratio is above a critical value that , in general , depends on network topology . 
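the topology dependence of that critical ratio can be made concrete with the standard heterogeneous mean - field estimate for sis - like dynamics on uncorrelated networks , ( \lambda / \lambda_i )_c = \langle k \rangle / \langle k^2 \rangle . this closed form is a textbook result quoted here as an assumption rather than taken from the text , and the exponent and degree cutoffs in the sketch ( python ) are illustrative :

```python
# heterogeneous mean-field threshold <k>/<k^2> for a truncated power law P(k) ~ k^-gamma
gamma_exp = 2.5
kmin, kmax = 3, 1000

degrees = range(kmin, kmax + 1)
weights = [k ** -gamma_exp for k in degrees]
norm = sum(weights)
pk = [w / norm for w in weights]   # normalized degree distribution

k_mean = sum(k * q for k, q in zip(degrees, pk))
k2_mean = sum(k * k * q for k, q in zip(degrees, pk))
threshold = k_mean / k2_mean   # always below 1/<k> when degrees are heterogeneous
```

because \langle k^2 \rangle grows with the cutoff for exponents below 3 , the threshold shrinks as the network acquires larger hubs , which is the usual reason heavy - tailed topologies sustain an endemic state so easily .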
once at steady state , the effective activation rate of a node depends on the topological configuration of the network , with a temporal average value on uncorrelated networks given by here is the node degree and is a temporal average of a degree - weighted prevalence depending on the degree distribution of nodes in the network , the average degree , and the fraction of nodes with degree in the active state . this temporal average fluctuates around a constant value in the stationary state of the endemic phase ( see appendix [ sec : dynamics ] ) , so that is proportional to degree . this result allows us to focus the problem on the study of a single node that changes state following poisson processes with rates for activation and for inactivation and that , when active , produces events according to a general distribution with average . for such a node , and any , we calculated analytically the effective inter - event time probability density function of production events ( see appendix [ sec : phi ] ) . here , we report the result in the laplace space , \[ \hat{\phi}(s) = \frac{ \dfrac{ \cdots \left[ 1 - \frac{1 - \hat{\psi}(s + \lambda_{i})}{\left< t_p \right> \left( s + \lambda_{i} \right)} \right]^{2} }{ \left< t_p \right> \left( s + \lambda_{i} \right)^{2} } }{ \dfrac{s + \lambda_{eff}(k)}{\lambda_{eff}(k) \, \lambda_{i}} - \dfrac{ \left[ 1 - \frac{1 - \hat{\psi}(s + \lambda_{i})}{\left< t_p \right> \left( s + \lambda_{i} \right)} \right] }{ s + \lambda_{i} } } \] where we have used that the probability density for the interval between the activation time and the first production event is given by , where is the survival function . the rationale for this choice is that the activation event can be regarded as taking place between two consecutive production events . since we do not have any other information , we assume that the time interval between those two events is greater than the time elapsed between the activation event and the first production . as a consequence , its probability density must be proportional to , and the denominator simply normalises the distribution .
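the inspection - paradox form of the first - production interval ( density proportional to the survival function , hence mean e[t^2] / ( 2 e[t] ) ) can be checked by monte carlo : drop a uniformly random activation instant into a long renewal sequence and record the time to the next event . the weibull shape 1/2 and scale 1 below are illustrative choices , for which the predicted mean residual time is \gamma(5) / ( 2 \gamma(3) ) = 6 :

```python
import bisect
import math
import random

random.seed(42)

def weibull_sample():
    """Weibull inter-event time, shape 0.5, scale 1: E[T] = 2, E[T^2] = 24."""
    return (-math.log(1.0 - random.random())) ** 2.0

horizon = 1_000_000.0
events, t = [], 0.0
while t < horizon:
    t += weibull_sample()
    events.append(t)

residuals = []
for _ in range(50_000):
    a = random.uniform(0.0, 0.99 * horizon)          # random "activation" instant
    nxt = events[bisect.bisect_right(events, a)]     # first event after it
    residuals.append(nxt - a)

mean_residual = sum(residuals) / len(residuals)      # theory: E[T^2]/(2 E[T]) = 6
```

the naive guess e[t] / 2 = 1 is off by a factor of six : long production intervals are far more likely to be hit by the activation instant , which is exactly why the first - production density picks up the survival function .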
hence , effective inter - event times of production events are ruled by the interplay of three key factors : the endogenous production statistics , the first production event statistics , and the activation dynamics through the effective activation rate that depends linearly on a node's degree . to measure the burstiness of production , we use the burstiness coefficient as defined in , where is the coefficient of variation and and are the mean and standard deviation of the time between consecutive production events . with this definition , corresponds to a strongly bursty production , to a neutral one following poisson statistics , and to a periodic signal . as an example , we now particularize to the case of a weibull distribution for production events with scale parameter controlling the spread of the distribution , and shape parameter implying a heterogeneous non - poissonian distribution such that all nodes have an endogenous production of events that cluster in time . the endogenous burstiness for a node continuously producing according to eq . is , independent of . the effective value of the burstiness of a node with interrupted production due to the network - driven activation / inactivation dynamics depends on its degree , . to compute it , we need to calculate the degree - dependent average and standard deviation of production inter - event times , which can be easily evaluated through simple derivatives of eq . ( [ eq : phi ] ) evaluated at ( see appendix [ sec : phi_weibull ] ) . the degree - dependent coefficient of variation reads where we have redefined and . as a validation , we display in fig . [ fig : theo_reduction1]a the analytical result for based on eq . along with the simulation of a single node whose state changes following poisson processes with rates for activation and for inactivation , and that intrinsically produces events according to eq . with rate .
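the burstiness coefficient used above is the goh - barabási measure b = ( \sigma - \mu ) / ( \sigma + \mu ) , restated here from the cited definition since the formula itself did not survive extraction . the sketch ( python ) evaluates it in the three regimes , using a weibull shape of 1/2 as an illustrative heavy - tailed choice ( b depends only on the shape , not the scale ) :

```python
import math

def burstiness(mean, std):
    """B = (sigma - mu)/(sigma + mu): -1 periodic, 0 Poisson, -> 1 strongly bursty."""
    return (std - mean) / (std + mean)

def weibull_burstiness(shape):
    """Closed-form B for Weibull inter-event times (the scale cancels out)."""
    m1 = math.gamma(1.0 + 1.0 / shape)
    m2 = math.gamma(1.0 + 2.0 / shape)
    cv = math.sqrt(m2 - m1 * m1) / m1     # coefficient of variation
    return (cv - 1.0) / (cv + 1.0)

b_periodic = burstiness(1.0, 0.0)    # -1: a perfectly regular signal
b_poisson = weibull_burstiness(1.0)  #  0: shape 1 is the exponential density
b_bursty = weibull_burstiness(0.5)   # ~0.38: heavy tail, clustered events
```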
the agreement between the analytical surface and the simulation points is excellent . both results prove that the level of endogenous burstiness for continuous production can be both increased and decreased , as shown also in the projection of the analytical surface on the plane in fig . [ fig : theo_reduction1]b . the effective burstiness always ranges between zero burstiness ( exponential inter - event time distribution ) in the limit and the maximum of when . for high values of , is dominated by the interplay of the endogenous production statistics and the first production event statistics . the range of values associated with effective burstiness around widens with increasing , and the endogenous production level is recovered only when both and are increased simultaneously . in contrast , the effective burstiness rises for low values of due to the effect of increased inactivation periods , such that for each the level of effective burstiness increases with . these results are qualitatively the same for any distribution ( see appendix [ sec : limits ] for a detailed analysis ) . * regulation of burstiness . * * a*. comparison between simulation data ( dots ) and the analytical calculation ( solid surface and line ) of the effective burstiness , in the blue region and in the green one . * b*. projection of the burstiness surface in * a * on the plane . colors in the contour plot code for different values of . ] in line with these arguments , and from the proportional dependence of on degree , eq . , one can expect that the effective burstiness of higher degree nodes is reduced while that of lower degree nodes can be modulated . to confirm this , we measured from simulations on a network with nodes ( simulation details in appendix [ sec : simulation ] ) . although our results are valid for a variety of uncorrelated network topologies , we used a scale - free network with characteristic exponent . in fig . [ fig : theo_reduction2]a , we show simulation results of for different values of . for all activation rates , a strong inverse dependence of effective burstiness on degree can indeed be observed . for high degree nodes , is noticeably below . this is due to the fact that hubs do not remain in the inactive state for long , since they are usually connected to many active nodes that constantly reactivate them . on the other hand , still allows control over the duration of inactive periods for low degree nodes . for low values of ( but above the minimum required to sustain the activity ) , low degree nodes remain a long time in the inactive state , which raises their effective burstiness well above , while for high values of they behave as high degree nodes , approaching a minimum value of independent of , a basal level below . therefore , we conclude that the bursty production of nodes can be regulated by network - driven stochastic activation dynamics that splits the originally uniform endogenous burstiness into a range of values anti - correlated with degree . fixing , the production shape parameter can be varied to regulate the effective level of burstiness , fig . [ fig : theo_reduction2]b , with similar qualitative behavior . in both cases , the disparity of effective values reaches a maximum at some intermediate parameter levels . however , can be increased without limitation ( in our computational experiments it is varied over four orders of magnitude ) , so that high - degree nodes can increase their effective burstiness above the endogenous level . * network - driven regulation of burstiness . * * a*. for different values of the activation rate and fixed , chosen so that a wide heterogeneity of behaviors can be observed and lies about the middle of the interval in which can be varied . * b*. for fixed and different values of the production scale parameter .
] finally , we also prove that network - driven activation can induce burstiness even when the endogenous production of nodes is not bursty . we let nodes in the active state produce events with an exponential inter - event time probability distribution function of rate , , meaning that the endogenous production of nodes is not bursty , . again , calculating the first two moments of eq . ( see appendix [ sec : phi_poisson ] ) , we obtain where and are the original values rescaled by . this analytical expression agrees almost perfectly with numerical simulations of a single node whose state changes following poisson processes with rates for activation and for inactivation , and that intrinsically produces events according to an exponential inter - event time probability distribution with rate , fig . [ fig : theo_induction]a . for an exponential distribution , so that has no role , as expected for a poisson process . hence , the effective burstiness can only be equal to or above the endogenous value . this induced burstiness is explained as a consequence of the fact that the node may remain in the inactive state for a period of time considerably longer than the average endogenous inter - event time . * network - driven induction of burstiness . * in all graphs , solid surface and lines correspond to analytical results and dots to simulations . * a * comparison between simulation data and the analytical calculation of the effective burstiness , . * b*. for fixed and different values of the activation rate . * c*. for fixed and different values of the production rate . ] we also simulated exponential production in the same scale - free network with nodes and characteristic exponent used in the regulation study . we used different values of with , fig . [ fig : theo_induction]b , and different values of over four orders of magnitude with , fig . [ fig : theo_induction]c .
as for bursty production , we find again that is always a decreasing function of ( but always above ) . due to their sustained activity , hubs are less susceptible to displaying induced bursty production . in contrast , nodes with low degree stay in the inactive state for longer periods and are more prone to present a raised effective burstiness . qualitatively , both parameters and can similarly control the induction of burstiness , affecting all nodes more efficiently for higher values of and lower values of . again , the disparity of induced burstiness values reaches a maximum at some intermediate parameter levels . different dynamical processes occurring on a network can interplay in unexpected ways . we have seen here that the time series of production events of interacting nodes are affected by network - driven activation / inactivation dynamics . in a broad range of parameters , hubs in a network tend to decrease burstiness in their endogenous pattern of production , hence helping to maintain a stable bulk level , at the expense of low degree nodes being more pliable to the regulatory effect of the network , which tends to amplify their fluctuations . at the same time , lower degree nodes are more prone to burstiness induction . these results indicate that a heterogeneous network structure protects the functioning of some nodes , the hubs , making low degree nodes better targets for engineering actions to produce local modifications of production without critically affecting the behavior of the whole system . taken together , our findings suggest that heterogeneous network interconnectivity may be a strategy in itself developed to protect complex systems against unpredictable functional fluctuations . however , further research should be conducted to determine the effects of different activation / inactivation dynamics on nodes ' burstiness . this work was supported by a james s.
mcdonnell foundation scholar award in complex systems ; the icrea academia prize , funded by the _ generalitat de catalunya _ ; mineco projects no . fis2013 - 47282-c2 - 1 , fis2010 - 21781-c02 - 02 , and bfu2010 - 21847-c02 - 02 ; _ generalitat de catalunya _ grant no . 2014sgr608 ; micinn ramn y cajal program , and ec fet - proactive project multiplex ( grant 317532 ) . |
in , dijkstra presents the following mutual exclusion protocol for a ring of nodes where each node can read the state x[i] \in \{ 0 , \dots , k - 1 \} of its anti - clockwise neighbour . node 0 is privileged when x[0] = x[n] , and any other node , 0 < i \le n , is privileged when x[i] \neq x[i-1] ; when node 0 fires it sets x[0] := ( x[0] + 1 ) \bmod k , and when any other node fires it copies its neighbour , x[i] := x[i-1] . suppose that x[i] = a for all with , that = ( a - 1 ) \bmod k for some , and that the new value of is . in order for node to change value from to , it must have copied from its anti - clockwise neighbour . but then , just after node copies from node , we actually have = x[n] = x[0] = b + 1 . because the other nodes merely copy values from their anti - clockwise neighbours , at this point no other node holds . the next time node fires , = x[0] = a , which is a legitimate state . | we show that , contrary to common belief , dijkstra's self - stabilizing mutual exclusion algorithm on a ring also stabilizes when the number of states per node is one less than the number of nodes on the ring . * keywords * : distributed computing , fault tolerance , self - stabilization . |
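the protocol and the note's claim lend themselves to a quick simulation sketch ( python ) . it enumerates every initial configuration of a small ring with one state fewer than nodes and runs a randomized scheduler that always fires a uniformly chosen privileged node ; the ring size and step budgets are illustrative , and a randomized run illustrates rather than proves stabilization :

```python
import itertools
import random

random.seed(1)

N, K = 5, 4   # K = N - 1 states per node, the regime the note claims is sufficient

def privileged(x):
    """Node 0 is privileged iff x[0] == x[N-1]; node i > 0 iff x[i] != x[i-1]."""
    priv = [0] if x[0] == x[-1] else []
    priv.extend(i for i in range(1, N) if x[i] != x[i - 1])
    return priv   # never empty: if no copier is privileged, all values agree

def fire(x, i):
    """Node 0 increments modulo K; every other node copies its anti-clockwise neighbour."""
    x[i] = (x[i] + 1) % K if i == 0 else x[i - 1]

stabilized = True
for init in itertools.product(range(K), repeat=N):    # every initial configuration
    x = list(init)
    for _ in range(1000):                             # burn-in under the random scheduler
        fire(x, random.choice(privileged(x)))
    for _ in range(50):                               # afterwards: exactly one privilege
        p = privileged(x)
        stabilized = stabilized and len(p) == 1
        fire(x, p[0])
```

with k = n states ( dijkstra's original sufficient condition ) the same loop passes as well ; the interesting content of the note is that dropping to k = n - 1 does not break it .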
the study of synchronization of coupled oscillators has attracted the attention of widely diverse research disciplines such as neuroscience , physics , mathematics and engineering . since the seminal works of winfree and kuramoto , the kuramoto model has served as a canonical model for synchronization that can capture quite rich dynamic behavior , including multiple equilibria , limit cycles , and even chaos . there are two main properties that characterize its behavior . the first one is the coupling function , which is a trigonometric function in the case of the kuramoto model . however , a broader class of coupling functions has also been studied . the second property , and perhaps the most important one , is the interconnection topology . the most popular assumption is that all oscillators are connected to each other , which corresponds to a fully connected graph , although a much more general approach is to study systems of oscillators with arbitrary underlying topology . due to its complex behavior , several assumptions are usually made to make the study tractable . for example , one can make the number of oscillators go to infinity and use statistical mechanics tools to characterize its convergence , or one can assume that all oscillators have equal intrinsic frequencies , and therefore form a ( gradient ) system of homogeneous oscillators that has globally convergent properties . alternatively , as we do in this article , one may let the frequencies take distinct values and characterize sufficient conditions for synchronization . in this paper , we consider a system of a finite number of heterogeneous kuramoto oscillators with general underlying topology . we show that when certain conditions on the coupling strength and initial phases are satisfied , the trajectories are bounded , which for kuramoto oscillators also implies synchronization . the most relevant previous work is , where the authors studied the same setup .
here, we build upon that work to obtain less restrictive conditions for synchronization. in particular, by using a novel computational-analytical approach, we are able to further exploit the particular features of each problem instance and outperform existing results. several examples are used to illustrate our findings and characterize the scaling behavior of our conditions. we consider a system of $N$ oscillators described by a kuramoto model, i.e. the behavior of each oscillator is governed by the following equation: $\dot{\theta}_i = \omega_i + \frac{K}{N} \sum_{j \in \mathcal{N}_i} \sin(\theta_j - \theta_i)$, where $\mathcal{N}_i$ is the set of oscillators connected to oscillator $i$, i.e. the set of its neighbors, $K$ is the coupling strength, which is assumed to be the same for all connections, and $N$ is the total number of oscillators in the system. we also assume that the oscillators are heterogeneous in intrinsic frequencies. this means that their intrinsic frequencies $\omega_i$ are not necessarily equal. the frequencies, however, do not change their values with time, so each $\omega_i$ is a constant. in this article we study frequency synchronization of the system. the oscillators achieve synchronization if $\dot{\theta}_i(t) \to \omega^*$ as $t \to \infty$, where $\omega^*$ is a constant common phase velocity. it is easy to show that this common phase velocity is the average of the intrinsic frequencies of the oscillators. indeed, when $\dot{\theta}_i = \omega^*$ for all $i$, adding up all the equations gives $N \omega^* = \sum_i \omega_i$, because each coupling term $\sin(\theta_j - \theta_i)$ is added to $\sin(\theta_i - \theta_j)$ and gives zero. we denote the average natural frequency by $\bar{\omega} = \frac{1}{N} \sum_i \omega_i$, and define the deviations of the natural frequencies by $\tilde{\omega}_i = \omega_i - \bar{\omega}$, where $\sum_i \tilde{\omega}_i = 0$. from now on we will study the rotating-frame system obtained by replacing $\omega_i$ with $\tilde{\omega}_i$. each limit cycle of the original system is an equilibrium of the rotating-frame system. therefore, we will focus on finding conditions under which the rotating-frame system converges to an equilibrium, i.e. when $\dot{\theta}_i \to 0$ for all $i$. due to the rotational invariance of the dynamics, we can shift, without loss of generality, all the initial phases by the same value so that their sum becomes equal to zero: $\sum_i \theta_i(0) = 0$. furthermore, since the phase average remains the same (for the rotating-frame system $\sum_i \dot{\theta}_i = 0$), the condition $\sum_i \theta_i(t) = 0$ will be satisfied along the trajectories.
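the model just described can be explored numerically. the sketch below assumes the standard sine coupling $\dot{\theta}_i = \omega_i + (K/N) \sum_{j \in \mathcal{N}_i} \sin(\theta_j - \theta_i)$; the graph, coupling strength and integration parameters are illustrative choices.

```python
# Forward-Euler simulation of the heterogeneous Kuramoto model on an
# arbitrary topology given by an adjacency matrix `adj`.
import numpy as np

def kuramoto_rhs(theta, omega, K, adj):
    # diffs[i, j] = sin(theta_j - theta_i); row-sum over the neighbours of i
    diffs = np.sin(theta[None, :] - theta[:, None])
    return omega + (K / len(theta)) * (adj * diffs).sum(axis=1)

def simulate(theta0, omega, K, adj, dt=0.01, steps=5000):
    theta = theta0.copy()
    for _ in range(steps):
        theta = theta + dt * kuramoto_rhs(theta, omega, K, adj)
    return theta
```

for a fully connected graph with a strong enough coupling, all instantaneous frequencies settle at the average natural frequency, which is exactly the frequency-synchronization criterion discussed above.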
in the rest of this articlewe will assume that the phase values sum up to zero at each time . in this articlewe show synchronization of oscillators by providing a lyapunov function and using lasalle s invariance theorem .when all the intrinsic frequencies are equal , i.e. deviations , the oscillators are called homogeneous , and the following lyapunov function can be employed : where and is the edge set of a given graph .it is easy to check that thus , since the function is -periodic on each element in , it is also well - defined on a n - dimensional torus ( ) , which is compact .thus , applying the lasalle s invariance theorem ( on ) guarantees synchronization of the oscillators .when the intrinsic frequencies are not equal , we have a system of heterogeneous oscillators , and we still can provide a potential function for this case : we can check again that the time derivative of this function is also non - positive and equal to zero only at an equilibrium , i.e. when the frequencies are synchronized .the problem here is that function is not bounded from below and can not be defined on the -dimensional torus , and therefore , we are not able to apply directly the lasalle s invariance theorem . however ,if we show that the trajectories of are bounded , then the function is bounded as well , and hence synchronization follows . 
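the dissipation argument above can be checked numerically. the sketch below assumes one common form of the heterogeneous potential, $V(\theta) = -\frac{K}{N} \sum_{(i,j) \in E} \cos(\theta_i - \theta_j) - \sum_i \tilde{\omega}_i \theta_i$, whose gradient flow $\dot{\theta} = -\nabla V$ reproduces the rotating-frame dynamics; the exact constants of the paper's function are not reproduced here, and the graph and parameters are illustrative.

```python
# Numerical check that the assumed heterogeneous potential is non-increasing
# along Euler-integrated trajectories of its own gradient flow.
import numpy as np

def potential(theta, wdev, K, edges):
    n = len(theta)
    coupling = sum(np.cos(theta[i] - theta[j]) for i, j in edges)
    return -(K / n) * coupling - wdev @ theta

def grad_flow_step(theta, wdev, K, edges, dt):
    n = len(theta)
    vel = wdev.copy()                        # vel = -grad V
    for i, j in edges:                       # each undirected edge acts on both ends
        vel[i] += (K / n) * np.sin(theta[j] - theta[i])
        vel[j] += (K / n) * np.sin(theta[i] - theta[j])
    return theta + dt * vel

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # an arbitrary small graph
wdev = np.array([0.1, -0.05, 0.05, -0.1])          # deviations sum to zero
theta = np.array([0.3, -0.2, 0.1, 0.0])
values = [potential(theta, wdev, 4.0, edges)]
for _ in range(2000):
    theta = grad_flow_step(theta, wdev, 4.0, edges, dt=1e-3)
    values.append(potential(theta, wdev, 4.0, edges))
```

along any trajectory the recorded values are non-increasing, matching the statement that the time derivative of the potential is non-positive.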
therefore, the main goal of this study is to find the conditions that guarantee that the trajectories are bounded. we will achieve this by finding a compact positively invariant set (pis) for the oscillators' phases, that is, a compact (closed and bounded) set such that if the system's initial conditions are within this set, the trajectories will remain in the set. the next section shows that when some conditions are met, such a pis exists, and therefore the system will converge to the set of equilibria. this section is organized as follows. we first formulate in proposition 1 a general sufficient condition for boundedness of the trajectories that leads to synchronization of the system. we then provide two solutions that guarantee fulfillment of proposition 1. our first solution contains explicit requirements on the coupling strength and the initial phases of the oscillators. this solution is further refined using computational tools. we will denote the maximum and minimum phase values at time $t$ by $\theta_{\max}(t)$ and $\theta_{\min}(t)$, where $\theta_i(t)$ is the phase of oscillator $i$ at time $t$. let $\delta(t)$ be defined as the maximum phase difference between two oscillators at time $t$, i.e. $\delta(t) = \theta_{\max}(t) - \theta_{\min}(t)$. then $\theta_{\min}(t) \leq \theta_i(t) \leq \theta_{\max}(t)$ for all $i$; in other words, each phase lies between the minimum and maximum phases. the maximum initial (at time $t=0$) pairwise phase difference is denoted by $\delta(0)$. if we can show that the maximum phase difference is always bounded, i.e. if $\delta(t) \leq \gamma$ for all $t$, where $\gamma$ is a constant satisfying $\gamma \geq \delta(0)$, then the trajectories will also be bounded, since the phase average remains the same. the pis therefore is defined through the maximum phase difference being bounded by the value of $\gamma$, i.e.
which is obviously a compact .we can now formulate a general condition that is sufficient to guarantee that the maximum phase difference is always bounded by a constant and thus the trajectories are also bounded .* proposition 1 * _ if is a constant satisfying , and for all times such that , the following condition is satisfied : for every two oscillators and such that and , then the maximum phase difference is bounded by , i.e. for all , trajectories of system are bounded , and system achieves frequency synchronization . _ condition says that when the maximum phase difference achieves value , it can not grow anymore and thus does not exceed .that s why the maximum phase difference will be always bounded by if is satisfied , and because the phase average remains the same , it also ensures that the trajectories of system are bounded in .since function is well - defined in , we can now apply lasalle s invariance theorem to guarantee that each solution of approaches the nonempty set , and system achieves frequency synchronization .it is possible that when , several oscillators have phase values equal to or . in this case conditionshould be satisfied for each pair of oscillators with a phase difference equal to .condition is very general by itself and difficult to check . in the rest of this sectionwe derive two conditions analytic and optimization - based that guarantee condition .these conditions contain requirements on the coupling strength and initial phases of oscillators that can be verified for each given system .we now introduce some additional notation .let i.e. is the squared euclidean norm of a vector of phases at time . for simplicity we will use symbol instead of . at initial time the value of this function is denoted by .the euclidean norm of a vector of the natural frequencies deviations is defined as : let the topology of a given system be defined by an undirected graph with a set of nodes such that , and with an edge set . 
by we denote the set where is the set of edges of a complete graph with nodes .we use to denote the minimum nodal degree of a graph .the following two lemmas are based on the results from and will be used in the next two subsections ; their proofs are provided in appendix .* lemma 1 * _ _ if , then where .distance between two nodes in a graph is the number of edges in a shortest path between these two nodes .* lemma 2 * _ if ] : and if in addition then will be upper bounded by : ] , and , and since , and .therefore , now , since , it is sufficient to show that .indeed , since , , which means that on the other hand , for the proof is similar . therefore , , and holds because of and .notice that even though it is possible that one of the functions or is negative , we demonstrated that their sum is always positive . finally , from using : thus , , if .when and condition is satisfied , will be always less than .indeed , if at time , then , and in contradiction to lemma 2 .therefore , we do not need to have an additional bound on to guarantee that . in theorem have two conditions and on the lower bound of the coupling strength , thus the theorem will hold when satisfies the largest of these two lower bounds . similarly to the analytic synchronization condition described in a previous subsection ,the optimization - based condition to be introduced in this subsection also guarantees that requirement of proposition 1 is satisfied .numerical techniques , however , allow us to improve the analytic synchronization condition .there are two bounds on the coupling strength in theorem 1 , and we will improve bound using optimization approach .our optimization approach utilizes additional information that has not been used in the analytic condition , for example , topology information has not been taken into account ( except for the minimum nodal degree ) . 
for each pair of verticeswe solve an optimization problem posed below and find the lower bound on .then we choose the maximum bound among these obtained lower bounds on the coupling strength .condition in proposition 1 is satisfied if we find the minimum possible value of the denominator and then obtain corresponding bound on by .the phases of oscillators constitute the phase vector ^{t} ] , then function satisfies the following differential inequality on ] ._ multiplying equation of by and summing them together : it can be verified that : .\ ] ] thus , using lemma 1 and the schwartz inequality : we now consider the following differential equation : this equation has one asymptotically stable equilibrium , and monotonically decreases to if condition is satisfied and therefore . by comparison principle , ,\ ] ] and thus $ ] .peter achermann and hanspeter kunz .modeling circadian rhythm generation in the suprachiasmatic nucleus with locally coupled self - sustained oscillators : phase shifts and phase response curves ._ journal of biological rhythms _ , 14(6):460 - 468 , 1999 .ll bonilla , cj perez vicente , and r spigler .time - periodic phases in populations of nonlinearly coupled oscillators with bimodal frequency distributions ._ physica d : nonlinear phenomena _ , 113(1):79 - 97 , 1998 .seung - yeal ha , zhuchun li , xiaoping xue , formation of phase - locked states in a population of locally interacting kuramoto oscillators , journal of differential equations , volume 255 , issue 10 , 15 november 2013 , pages 3053 - 3070 , issn 0022 - 0396 , http://dx.doi.org/10.1016/j.jde.2013.07.013 .y. kuramoto , self - entrainment of a population of coupled non - linear oscillators , _ in int .symposium on mathematical problems in theoretical physics _ , h. araki , ed .39 of lecture notes in physics , springer , 1975 , pp .420 - 422 .e. mallada and a. tang .`` distributed clock synchronization : joint frequency and phase consensus . 
'' in _ proceeding of the 50th ieee conference on decision and control , and european control conference _ , pages 6742 - 6747 .ieee , 2011 .p. monzn and f. paganini .global considerations on the kuramoto model of sinusoidally coupled oscillators . in _ proceedings of the 44th ieee conference on decision and control , and european control, pages 3923 - 3928 .ieee , 2005 .strogatz , from kuramoto to crawford : exploring the onset of synchronization in populations of coupled oscillators , physica d : nonlinear phenomena , volume 143 , issues 14 , 1september 2000 , pages 1 - 20 , issn 0167 - 2789 . | we study synchronization of coupled kuramoto oscillators with heterogeneous inherent frequencies and general underlying connectivity . we provide conditions on the coupling strength and the initial phases which guarantee the existence of a positively invariant set ( pis ) and lead to synchronization . unlike previous works that focus only on analytical bounds , here we introduce an optimization approach to provide a computational - analytical bound that can further exploit the particular features of each individual system such as topology and frequency distribution . examples are provided to illustrate our results as well as the improvement over previous existing bounds . |
we consider a generalized eigenvalue problem ( eigenproblem ) for a linear pencil with symmetric ( hermitian in the complex case ) matrices and with positive definite .the eigenvalues are enumerated in decreasing order and the denote the corresponding eigenvectors .the largest value of the rayleigh quotient where denotes the standard scalar product , is the largest eigenvalue .it can be approximated iteratively by maximizing the rayleigh quotient in the direction of its gradient , which is proportional to .preconditioning is used to accelerate the convergence ; see , e.g. , and the references therein . herewe consider the simplest preconditioned eigenvalue solver ( eigensolver)the gradient iterative method with an explicit formula for the step size , cf . , one step of which is described by the symmetric ( hermitian in the complex case ) positive definite matrix in ( [ e.preeig ] ) is called the _preconditioner_. since and are both positive definite , we assume that the following result is proved in for symmetric matrices in the real space . [ t.1 ] if then and the convergence factor can not be improved with the chosen terms and assumptions .compared to other known non - asymptotic convergence rate bounds for similar preconditioned eigensolvers , e.g. 
, , the advantages of are in its sharpness and elegance .method is the easiest preconditioned eigensolver , but still remains the only known sharp bound in these terms for any of preconditioned eigensolvers .while bound is short and simple , its proof in is quite the opposite .it covers only the real case and is not self - contained in addition it requires most of the material from .here we extend the bound to hermitian matrices and give a new much shorter and self - contained proof of theorem [ t.1 ] , which is a great qualitative improvement compared to that of .the new proof is not yet as elementary as we would like it to be ; however , it is easy enough to hope that a similar approach might be applicable in future work on preconditioned eigensolvers .our new proof is based on novel techniques combined with some old ideas of .we demonstrate that , for a given initial eigenvector approximation , the next iterative approximation described by belongs to a cone if we apply any preconditioner satisfying . 
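the fixed-step preconditioned gradient iteration discussed above can be sketched in a few lines. the step size $\tau$ and the trivial preconditioner $T = I$ below are illustrative choices; the method's explicit step-size formula is not reproduced here.

```python
# Gradient ascent on the Rayleigh quotient rho(x) = (x'Ax)/(x'Bx): one step
# moves along the preconditioned gradient direction T(Ax - rho(x)Bx).
import numpy as np

def rayleigh(x, A, B):
    return (x @ A @ x) / (x @ B @ x)

def gradient_eigensolver(A, B, T, x0, tau=0.2, iters=1000):
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        rho = rayleigh(x, A, B)
        x = x + tau * (T @ (A @ x - rho * (B @ x)))
        x = x / np.linalg.norm(x)   # scaling leaves the Rayleigh quotient unchanged
    return rayleigh(x, A, B), x
```

with $A = \mathrm{diag}(1,\dots,5)$ and $B = T = I$ the iterates converge to the largest eigenvalue, illustrating the monotone increase of the Rayleigh quotient toward $\lambda_1$.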
we analyze a corresponding continuation gradient method involving the gradient flow of the rayleigh quotient andshow that the smallest gradient norm ( evidently leading to the slowest convergence ) of the continuation method is reached when the initial vector belongs to a subspace spanned by two specific eigenvectors , namely and .this is done by showing that temple s inequality , which provides a lower bound for the norm of the gradient , is sharp only in .next , we extend by integration the result for the continuation gradient method to our actual fixed step gradient method to conclude that the point on the cone , which corresponds to the poorest convergence and thus gives the guaranteed convergence rate bound , belongs to the same two - dimensional invariant subspace .this reduces the convergence analysis to a two - dimensional case for shifted inverse iterations , where the sharp convergence rate bound is established .we start with several simplifications : [ t.simp ] we can assume that , , is diagonal , eigenvalues are simple , , and in theorem [ t.1 ] without loss of generality . first , we observe that method and bound are evidently both invariant with respect to a real shift if we replace the matrix with , so without loss of generality we need only consider the case which makes second , by changing the basis from coordinate vectors to the eigenvectors of we can make diagonal and .third , having if or , or both , bound becomes trivial .the assumption is a bit more delicate .the vector depends continuously on the preconditioner , so we can assume that and extend the final bound to the case by continuity. 
finally , we again use continuity to explain why we can assume that all eigenvalues ( in fact , we only need and ) are simple and make and thus without changing anything .let us list all -dependent terms , in addition to all participating eigenvalues , in method ( [ e.ep ] ) : and ; and in bound ( [ e.muest ] ) : and .all these terms depend on continuously if is slightly perturbed into with some , so we increase arbitrarily small the diagonal entries of the matrix to make all eigenvalues of simple and .if we prove bound for the matrix with simple positive eigenvalues , and show that the bound is sharp as with , we take the limit and by continuity extend the result to the limit matrix with and possibly multiple eigenvalues .it is convenient to rewrite equivalently by theorem [ t.simp ] as follows denotes the euclidean vector norm , i.e. , for a real or complex column - vector , as well as the corresponding induced matrix norm . ] and if and then and now we establish the validity and sharpness of bound assuming and .[ t.cone ] let us define ] but only eigenvectors can be special points of ode .the condition thus uniquely determines for a given initial value .the absolute value of the decrease of the rayleigh quotient along the path is our continuation method using the _ normalized _ gradient flow is nonstandard , but its advantage is that it gives the following simple expression for the length of , since the initial value is determined by , we compare a generic with the special choice , using the superscript to denote all quantities corresponding to the choice . by theorem [ t.wcurve ] implies , so we have as is an invariant subspace for the gradient of the rayleigh quotient . at the end points , by their definition .our goal is to bound the initial value by , so we compare the lengths of the corresponding paths and and the norms of the gradients along these paths .we start with the lengths .we obtain by theorem [ t.2dgrad ] . 
herethe angle is the smallest angle between any two vectors on the cones boundaries and .thus , as our one vector by theorem [ t.wcurve ] , while the other vector can not be inside the cone since by theorem [ t.cone ] . as is a unit vector , as the angle is the length of the arc the shortest curve from to on the unit ball . for our special -choice , inequalities from the previous paragraph turn into equalities , as is in the intersection of the unit ball and the subspace ,so the path is the arc between to itself . combining everything together , by theorem [ t.2dgrad ] on the norms of the gradient , for each pair of independent variables and such that using theorem [ t : iif ], we conclude that as , i.e. , the subspace gives the smallest value by theorem [ t.2dred ] the poorest convergence is attained with and with the corresponding minimizer described in theorem [ t.wcurve ] , so finally our analysis is now reduced to the two - dimensional space .[ t.2d ] bound holds and is sharp for . assuming and , we derive and similarly for where . since , we have . assuming , this identity implies which contradicts our assumption that is not an eigenvector . for and by theorem [ t.wcurve ] , the inverse exists .next we prove that and that it is a strictly decreasing function of indeed , using and our cosine - based definition of the angles , we have where we substitute , which gives using , multiplication by leads to a simple quadratic equation , for as , , and , the discriminant is positive and the two solutions for , corresponding to the minimum and maximum of the rayleigh quotient on , have different signs . the proof of theorem [ t.wcurve ]analyzes the direction of the gradient of the rayleigh quotient to conclude that and correspond to the minimum . repeating the same arguments with that corresponds to the maximum . 
but since , hence the negative corresponds to the maximum and thus the positive corresponds to the minimum .we observe that the coefficients and are evidently increasing functions of , while does not depend on .thus is strictly decreasing in , and taking gives the smallest since where now , condition implies and implies so we introduce the convergence factor as where we use and again .we notice that is a strictly decreasing function of and thus takes its largest value for giving i.e. , bound that we are seeking .the convergence factor can not be improved without introducing extra terms or assumptions .but deals with , not with the actual iterate .we now show that for there exist a vector and a preconditioner satisfying such that and in both real and complex cases . in the complex case , let us choose such that and according to , then the real vector is a minimizer of the rayleigh quotient on , since and . finally , for a real with and a real properly scaled is a real matrix satisfying such that , which leads to with indeed , for the chosen we scale such that so . as vectors and are real and have the same length there exists a _real _ householder reflection such that .setting we obtain the required identity .any householder reflection is symmetric and has only two distinct eigenvalues , so we conclude that is real symmetric ( and thus hermitian in the complex case ) and satisfies .the integration of inverse functions theorem follows . [t : iif ] let \to{\bf r} ] we have .if for all ] we have for any $ ] we have ( using ) if , then for the derivatives of the inverse functions it holds that since and are strictly monotone increasing functions the integrands are positive functions and as well as . 
comparing the lower limits of the integrals gives the statement of the theorem .we present a new geometric approach to the convergence analysis of a preconditioned fixed - step gradient eigensolver which reduces the derivation of the convergence rate bound to a two - dimensional case .the main novelty is in the use of a continuation method for the gradient flow of the rayleigh quotient to locate the two - dimensional subspace corresponding to the smallest change in the rayleigh quotient and thus to the slowest convergence of the gradient eigensolver .an elegant and important result such as theorem [ t.1 ] should ideally have a textbook - level proof .we have been trying , unsuccessfully , to find such a proof for several years , so its existence remains an open problem .we thank m. zhou of university of rostock , germany for proofreading .m. argentati of university of colorado denver , e. ovtchinnikov of university of westminster , and anonymous referees have made numerous great suggestions to improve the paper and for future work .d e. g. dyakonov , _ optimization in solving elliptic problems _ , crc press , 1996 .kny1986 a. v. knyazev , _ computation of eigenvalues and eigenvectors for mesh problems : algorithms and error estimates _ ,( in russian ) , deptmoscow , 1986 .k99 a. v. knyazev , _ preconditioned eigensolvers : practical algorithms _ , in z. bai , j. demmel , j. dongarra , a. ruhe , and h. van der vorst , editors , templates for the solution of algebraic eigenvalue problems : a practical guide , pp .siam , philadelphia , 2000 .k00 a. v. knyazev , _ toward the optimal preconditioned eigensolver : locally optimal block preconditioned conjugate gradient method _ , siam j. sci .comput . , 23 ( 2001 ) , pp. 517541 .knn2003 a. v. knyazev and k. neymeyr , _ a geometric theory for preconditioned inverse iteration .iii : a short and sharp convergence estimate for generalized eigenvalue problems _ , linear algebra appl ., 358 ( 2003 ) , pp . 95114 .ney2001a k. 
neymeyr , _ a geometric theory for preconditioned inverse iteration .i : extrema of the rayleigh quotient _ , linear algebra appl ., 322 ( 2001 ) , pp .ney2001b k. neymeyr , _ a geometric theory for preconditioned inverse iteration .ii : convergence estimates _ , linear algebra appl ., 322 ( 2001 ) , pp . 87104 . | preconditioned eigenvalue solvers ( eigensolvers ) are gaining popularity , but their convergence theory remains sparse and complex . we consider the simplest preconditioned eigensolver the gradient iterative method with a fixed step size for symmetric generalized eigenvalue problems , where we use the gradient of the rayleigh quotient as an optimization direction . a sharp convergence rate bound for this method has been obtained in 20012003 . it still remains the only known such bound for any of the methods in this class . while the bound is short and simple , its proof is not . we extend the bound to hermitian matrices in the complex space and present a new self - contained and significantly shorter proof using novel geometric ideas . iterative method ; continuation method ; preconditioning ; preconditioner ; eigenvalue ; eigenvector ; rayleigh quotient ; gradient iteration ; convergence theory ; spectral equivalence 49m37 65f15 65k10 65n25 ( place for digital object identifier , to get an idea of the final spacing . ) |
kernel methods play an important role in solving various machine learning problems, such as non-linear regression and classification tasks. kernel methods rely on the so-called ``kernel trick'', which transforms a given non-linear problem into a linear one by using a kernel function, i.e. a similarity function defined over pairs of input data points. this way, the input data is mapped into a high-dimensional (or even infinite-dimensional) feature space, where the inner product can be calculated with a positive definite kernel function (that is, one satisfying mercer's condition): $k(x, x') = \langle \varphi(x), \varphi(x') \rangle$. therefore, the mapping into the high-dimensional feature space is done implicitly, without the need to explicitly map the data points. also, assuming that $\{x_i\}_{i=1}^{n}$ is the training data, then by the representer theorem any non-linear function can be expressed as a linear combination of kernel products evaluated on the training data points: $f(x) = \sum_{i=1}^{n} a_i k(x, x_i)$. the main methods for kernel classification problems are the support vector machine (svm) and the least-squares support vector machine (ls-svm). in this paper we focus on the ls-svm classifier, where the main difficulty is the $O(n^3)$ training complexity, where $n$ is the size of the training data set. because of this high complexity, the ls-svm method is not a suitable candidate for applications with large data sets.
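as a small illustration of the kernel trick, the rbf kernel (one common choice satisfying mercer's condition) yields a symmetric positive semi-definite gram matrix, and a representer-style function is just a weighted sum of kernel evaluations at the training points; the bandwidth `sigma` is an illustrative parameter.

```python
# RBF Gram matrix and a representer-theorem-style kernel expansion.
import numpy as np

def rbf_kernel(X, Z, sigma=1.0):
    """K[i, j] = exp(-||X[i] - Z[j]||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2))

def kernel_expansion(a, X_train, x, sigma=1.0):
    """Representer-theorem form: f(x) = sum_i a_i k(x, x_i)."""
    return rbf_kernel(x[None, :], X_train, sigma)[0] @ a
```

the gram matrix is computed from pairwise distances only, so the (infinite-dimensional) feature map never has to be formed explicitly.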
here, we discuss several approximation methods using randomized block kernel matrices that significantly reduce the complexity of the problem. the proposed methods are based on the nyström, kaczmarz and matching pursuit algorithms, and we show that they provide good accuracy and reliable scaling to relatively large multi-class classification problems. in the binary classification setting the kernel svm method constructs an optimal separating hyperplane (with the maximal margin) between the two classes in the feature space. the training problem is represented as a (convex) quadratic programming problem involving inequality constraints, which has a global and unique solution. the kernel ls-svm method simplifies the optimization problem by considering equality constraints only, such that the solution is obtained by solving a system of linear equations. with these modifications, the problem is equivalent to a ridge regression formulation with binary targets. also, the kernel ls-svm allows us to treat the multi-class classification task in a much simpler and more compact way. more exactly, we assume that the $k$ classes are encoded using the standard basis in the space $\mathbb{R}^k$. therefore, if $x_i$ is an input in class $j$, then the output $y_i$ is encoded by a binary row vector with 1 in the $j$-th position and 0 in all other positions. thus, for the input data $\{(x_i, y_i)\}_{i=1}^{n}$ and the feature mapping function $\varphi$ we consider the following optimization problem: $$\min_{w, b, e}\; \frac{1}{2} \sum_{j=1}^{k} \Vert w_j \Vert^2 + \frac{\gamma}{2} \sum_{i=1}^{n} \sum_{j=1}^{k} e_{ij}^2,$$ such that: $$y_{ij} = w_j^{T} \varphi(x_i) + b_j + e_{ij}, \quad i=1,\dots,n,\; j=1,\dots,k,$$ where $b_j$ is the bias coefficient, $w_j$ is the vector of weights corresponding to the $j$-th class, and $e_{ij}$ is the approximation error. the objective function is a sum of the least-squares error and a regularization term, and therefore it corresponds to a multi-dimensional ridge regression problem with the regularization parameter $\gamma$.
in the primal weight spacethe multi - class classifier takes the form : where is the nonlinear softmax function : because may be infinite dimensional we seek a solution in the dual space by introducing the lagrangian : ,\ ] ] with the lagrange multipliers .the optimality conditions for the minimization problem are : by eliminating and we obtain : a_{nj } + b_j = y_{ij } , \quad i=1,\dots , n,\ ; j=1,\dots , k,\ ] ] where we applied the condition , and is kronecker s delta : if , and otherwise . therefore , in the dual space the multi - class classifier takes the form : one can see that the above system of equations ( 13 ) is equivalent to independent systems of equations with binary targets .each system can be written in the following equivalent form : where is the identity matrix , ^t ] and ^t$ ] are the weight and respectively the target column vectors for the class .each system has linear equations with unknowns , and requires the inversion of the same matrix : the above systems can be written in a compact matrix form as following : where and are matrices with the columns : despite these nice mathematical properties , the complexity of the problem is , and it is implied by the matrix inversion .our goal is to reduce the complexity of the kernel ls - svm classification method by using iterative approximations based on randomized block kernel matrices .more exactly we use the well known nystrm method, and we introduce new methods based on the kaczmarz and the matching pursuit algorithms . the nystrm method is probably the most popular low rank approximation of the matrix . since the kernel matrix is a positive definite symmetric matrix we can find its eigenvalue decomposition : where is the diagonal matrix of the eigenvalues , and is the corresponding matrix of the eigenvectors . 
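the dual ls-svm solve described above amounts, for each class, to one bordered linear system whose first row enforces that the dual coefficients sum to zero. the sketch below uses an rbf kernel and illustrative values for the kernel width and the regularization parameter; since the softmax is monotone, the predicted class is simply the argmax of the discriminants.

```python
# Multi-class LS-SVM in the dual: for each class j solve the (n+1)x(n+1)
# system  [ 0   1^T       ] [ b_j ]   [ 0   ]
#         [ 1   K + I/gam ] [ a_j ] = [ y_j ],
# then classify with f_j(x) = sum_i a_ij k(x, x_i) + b_j.
import numpy as np

def rbf(X, Z, sigma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, labels, n_classes, gamma=10.0, sigma=1.0):
    n = len(X)
    Y = np.eye(n_classes)[labels]                 # one-hot binary targets
    M = np.zeros((n + 1, n + 1))
    M[0, 1:] = 1.0
    M[1:, 0] = 1.0
    M[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma
    rhs = np.vstack([np.zeros(n_classes), Y])     # one right-hand side per class
    sol = np.linalg.solve(M, rhs)
    return sol[0], sol[1:]                        # biases (k,), dual weights (n, k)

def lssvm_predict(X_train, b, a, X_new, sigma=1.0):
    F = rbf(X_new, X_train, sigma) @ a + b        # class discriminants
    return F.argmax(axis=1)                       # softmax is monotone
```

note that the same $(n+1)\times(n+1)$ matrix is factorized once and reused for all $k$ right-hand sides, as observed above.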
using the sherman - morrison formula one can show that the solution of the linear system of equations can be written as : \ ] ] to reduce the computation one can consider only a small selection , , based on a random sample of the training data .the eigenvalue decomposition of is therefore : one can show that the following relations exist between the eigenvalues and eigenvectors of the two matrices and respectively : such that : where and are the and respectively the block submatrices taken from , corresponding to the randomly selected columns . therefore , in this case only a matrix of size needs to be inverted , which simplifies the computation .while this method seems to scale as it has the difficulty of optimally choosing the support vectors from the training data set .it has been shown that an optimal method for choosing the support vectors is based on the maximization of the renyi entropy of the the matrix . therefore , one can use a heuristic greedy algorithm to search for the best set of support vectors , that maximize the renyi entropy of , however this approach becomes difficult in the case of large data sets .here we consider a much simpler approach , based on a committee machine made of classifiers .let us assume that the classifiers are characterized by the following randomly selected matrices : , , , .the random sampling can be done without replacement , such that after selections all the training data is used .an efficient randomization strategy would be to use a random shuffling function which generates a random permutation of the set , and then to select contiguous subsets with size from .after the algorithm consumes all the subsets one can re - shuffle in order to create another randomized list . for each classifier we solve the system : the solution is given by : where and are the moore - penrose pseudo - inverse matrices of the random block matrices and , which can be calculated at a lower cost of . 
in the end we aggregate the weights of the classifiers, and the classification process is performed using the equations (14) and (15). the kaczmarz method is a popular solver for overdetermined linear systems of equations of the form $Ax = b$, with $A \in \mathbb{R}^{m \times n}$, $m \geq n$. the method has numerous applications in computed tomography and image reconstruction from projections. assuming that $x^{(0)}$ is an initial estimate of the solution, the algorithm proceeds cyclically and at each step it projects the estimate onto the hyperplane of the $i$-th equation, whose normal is the row $a_i$ of $A$: $$x^{(t+1)} = x^{(t)} + \frac{b_i - \langle a_i, x^{(t)} \rangle}{\Vert a_i \Vert^2}\, a_i.$$ because of its inherent cyclical definition, the convergence of the method depends on the order of the rows. a simple method to overcome this problem is to select the rows of $A$ in a random order, $i = r(t)$, where $r$ is a random function. it has been shown that if the rows are selected with probability $p_i = \Vert a_i \Vert^2 / \Vert A \Vert_F^2$, where $\Vert A \Vert_F$ is the frobenius norm, then the algorithm converges in expectation exponentially, with a rate independent of the number of equations: $$\mathbb{E}\, \Vert x^{(t)} - x \Vert^2 \leq \left( 1 - \frac{\sigma_{\min}^2(A)}{\Vert A \Vert_F^2} \right)^{t} \Vert x^{(0)} - x \Vert^2,$$ which can be bounded in terms of the condition number $\kappa(A) = \sigma_{\max}(A)/\sigma_{\min}(A)$, with $\sigma_{\max}$ and $\sigma_{\min}$ the maximal and minimal singular values of $A$, since $\Vert A \Vert_F^2 \leq n\, \sigma_{\max}^2(A)$. this remarkable result shows that the estimate converges exponentially fast to the solution. also, since each iteration requires only $O(n)$ operations, the overall cost is much smaller than the $O(n^3)$ required by the standard direct approach. another big advantage is the low memory requirement, since the method only needs the randomly chosen row at a given time, and it does not operate with the whole matrix. this makes the method appealing for very large systems of equations.
in this case the iterative equation takes the form : and is a uniformly distributed random function . in the case of our multi - class classification problem , we prefer a randomized block version of the kaczmarz method , in order to use the computing resources more efficiently . also we assume that the system of equations is standardized using the above described procedure . in this case at each iteration we randomly select a subset of the row indexes of , with a size , and we project the current estimate onto a subspace normal to these rows : where : is the moore - penrose pseudo - inverse of the matrix , and is the subvector of with the components from . the above equation is equivalent to : where is the exact solution of the system . therefore we have : this can be rewritten as : where . since is an orthogonal projection we have : \( \Vert x(t+1 ) - x \Vert^2 \leq \Vert x(t ) - x \Vert^2 \) , and the algorithm is guaranteed to converge . in fact , for , the randomized block algorithm is equivalent to the simple ( one row at a time ) randomized algorithm , which guarantees exponential convergence .
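the block update x(t+1 ) = x(t ) + a_tau^+ ( b_tau - a_tau x(t ) ) can be sketched as follows . since each block is small , a naive gaussian elimination on the q x q normal matrix is enough for the pseudo - inverse ; this is an illustrative python sketch with toy dimensions of ours , and it assumes the selected rows are linearly independent :

```python
import random

def solve(M, v):
    """naive gauss-jordan elimination with partial pivoting (q x q)."""
    n = len(M)
    M = [row[:] + [v[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def block_kaczmarz(A, b, q=2, iters=50, seed=2):
    """x(t+1) = x(t) + pinv(A_tau) (b_tau - A_tau x(t)) for a random row
    block tau of size q; pinv(A_tau) = A_tau^T (A_tau A_tau^T)^{-1} when
    the selected rows are independent."""
    rng = random.Random(seed)
    m = len(A)
    x = [0.0] * len(A[0])
    for _ in range(iters):
        tau = rng.sample(range(m), q)
        rows = [A[i] for i in tau]
        resid = [b[i] - sum(a * xi for a, xi in zip(A[i], x)) for i in tau]
        gram = [[sum(u * w for u, w in zip(ri, rj)) for rj in rows] for ri in rows]
        y = solve(gram, resid)
        for row, yi in zip(rows, y):          # x += A_tau^T y
            x = [xi + yi * a for xi, a in zip(x, row)]
    return x

# the same toy consistent system, exact solution (1, 2)
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 2.0, 3.0]
xb = block_kaczmarz(A, b)
```

each step projects the estimate orthogonally onto the intersection of the q selected hyperplanes , which is why the distance to the solution can only decrease .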
in our multi - class classification setting we solve simultaneously systems of equations , corresponding to the different classes : therefore , the iterative equations are : where and are the standardized versions of the randomly selected block matrix and the corresponding target subvector . these equations can be written compactly in a matrix form as follows : the iteration stops when no significant improvement for is made , or the number of iterations reaches a maximum accepted value . once the matrix containing the weights and the bias is calculated , one can use the multi - class classifier defined by the equations ( 14)-(15 ) , in order to classify any new data sample . the matching pursuit ( mp ) method is frequently used to decompose a given signal into a linear expansion of functions selected from a redundant dictionary of functions . thus , given a signal , we seek a linear expansion approximation : where are the columns of the redundant dictionary , with . here , is a selection function which returns the index of the column from the dictionary . thus , we solve the minimization problem : starting with an initial approximation , at each new step the algorithm iteratively selects a new column from the dictionary , in order to reduce the future residual . therefore , from one can build a new approximation : by finding and that minimize the residual error : the minimization condition : gives : and therefore we have : \( \Vert r_m \Vert^2 \leq \Vert r_{m-1 } \Vert^2 \) . the index of the best column that minimizes the residual is given by : the mp algorithm can be easily extended for kernel functions .
however , for large data sets this approach is not quite efficient due to the increasing time required by the search for the best column . we notice that according to ( 54 ) the algorithm converges even if the selection of the column is done randomly . therefore , here we prefer this weaker form , where the functions from the redundant dictionary are simply selected randomly . in our case , we consider that the dictionary corresponds to the matrix , and as before we consider a block matrix formulation of the weak matching pursuit algorithm . initially the matrix is empty , , and the residual is set to . at each iteration step we select randomly ( without replacement ) columns from , which we denote as the matrix , where is the random list of columns . thus , at each iteration step we solve the following system : from here we find the solution at time : the weights are updated as : here , is the block of containing only the rows from the random list . also , the new residual is updated as follows : due to the orthogonality of these quantities we have : and the algorithm is guaranteed to converge . once the matrix is calculated , one can use the multi - class classifier defined by the equations ( 14)-(15 ) , in order to classify any new data sample . interestingly , one can easily combine the kaczmarz and the mp methods into a hybrid kaczmarz - mp kernel method .
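a minimal sketch of the weak ( randomly selected ) matching pursuit update , one column at a time ; the dictionary and signal are toy values of ours . the residual norm can only decrease at each step , which is the convergence argument used above :

```python
import random

def weak_mp(columns, y, steps=200, seed=3):
    """weak matching pursuit: pick a dictionary column at random, subtract
    its best scalar multiple from the residual; ||r|| never increases."""
    rng = random.Random(seed)
    r = y[:]
    coeffs = [0.0] * len(columns)
    for _ in range(steps):
        j = rng.randrange(len(columns))
        a = columns[j]
        c = sum(ai * ri for ai, ri in zip(a, r)) / sum(ai * ai for ai in a)
        coeffs[j] += c
        r = [ri - c * ai for ri, ai in zip(r, a)]   # orthogonal residual update
    return coeffs, r

# toy redundant dictionary in the plane (three columns for two dimensions)
cols = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
coeffs, resid = weak_mp(cols, [3.0, 5.0])
```

the same randomly drawn index list could equally drive a kaczmarz projection step , which is the idea behind the hybrid variant .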
in both methods we have to select a random list of indexes , therefore we can use the same list to perform a kaczmarz step followed by an mp step ( or vice versa ) . the cost of this approach is similar to the randomized nyström method , since it requires two matrix pseudo - inverses per iteration step . in this section we provide several new numerical results obtained for the mnist and the cifar-10 data sets , which illustrate the practical applicability and the performance of the proposed methods . the mnist data set of hand - written digits is a widely - used benchmark for testing multi - class classification algorithms . the data consists of training samples and 10,000 test samples , corresponding to different classes . all the images , , are of size pixels , with gray levels , and the intra - class variability consists of local translations and deformations of the hand - written digits . some typical examples of images from the mnist data set are shown in figure 1 . while the mnist data set is not very large , it still imposes a challenge for the standard kernel classification approach based on ls - svm , because the kernel matrix has elements , which in double precision ( 64 - bit floating point numbers ) requires about 28.8 gb . in order to simulate even more drastic conditions we used a pc with only 8 gb of ram and 4 cpu cores . also , all the simulation programs were written using the julia language . obviously , with such a limited machine , the attempt to solve the problem directly ( for example using the gaussian elimination or the cholesky decomposition ) is not feasible . however , we can easily use the proposed randomized approximation methods , and in order to do so we perform all the computations in single precision ( 32 - bit floating point numbers ) , which doubles the storage capability compared to the case of double precision . thus , we trade a potential precision loss for extra memory storage space , in order to adapt to the data size .
in our numerical experiments we have used only the raw data , without any augmentation or distortion . it is well known that better results can be obtained by using more complex unsupervised learning of image features , data augmentation and image distortion methods at the pre - processing level . however , our goal here is only to evaluate the performance of the discussed methods , and therefore we prefer to limit our investigation to the raw data . to our knowledge , the best reported results in the literature for the kernel svm classification of the mnist raw data have a classification error of , and respectively , and they have been obtained by combining ten kernel svm classifiers . we will use these values for comparison with our results . in our approach we used a simple pre - processing method , consisting of a two - step normalization of each image , as follows : where denotes the average . also , the images are `` vectorized '' by concatenating the columns , such that : . therefore , after pre - processing all the images are unit norm vectors : , . this normalization is useful because it makes all the inner products equal to the cosine of the angle between the vectors , which is a good similarity measure : we have experimented with several kernel types ( gaussian , polynomial ) , and the best results have been obtained with a polynomial kernel of degree 4 : therefore all the results reported here are for this particular kernel function . also , the regularization parameter was always set to , and the classification error was simply measured as the percentage of the test images which have been incorrectly classified . the full kernel matrix : would still require 14.4 gb , which is not feasible with the imposed constraints , and therefore we must calculate the kernel elements on the fly at each step .
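the two - step normalization and the kernel evaluation can be sketched as below . note that the exact form of the degree - 4 polynomial kernel ( scaling , offset ) is not fully recoverable from the text , so the plain form k(x , y ) = ( x^t y )^4 is an assumption of ours :

```python
import math

def normalize(img):
    """two-step normalization: subtract the mean, then divide by the
    euclidean norm, so every image becomes a unit vector and inner
    products are cosines of angles."""
    mu = sum(img) / len(img)
    v = [p - mu for p in img]
    s = math.sqrt(sum(x * x for x in v))
    return [x / s for x in v]

def poly_kernel(x, y, d=4):
    """degree-d polynomial kernel; the plain form (x . y)**d is an
    assumption, since the exact scaling/offset is not given."""
    return sum(a * b for a, b in zip(x, y)) ** d

a = normalize([0.0, 1.0, 2.0, 3.0])
b = normalize([3.0, 1.0, 0.0, 2.0])
k_ab = poly_kernel(a, b)
```

because the inputs are unit vectors , the inner product lies in [ -1 , 1 ] and the even power keeps the kernel value in [ 0 , 1 ] .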
here , the exponent means that the power is calculated element - wise . also we assume that the vectorized images correspond to the rows of the matrix . the memory left is still enough to hold the matrix : which is required for the classification of the testing data . this matrix is not really necessary to store in memory , since the classification can be done separately for each test image once the weights and the biases have been computed . this matrix requires about 2.4 gb and it is convenient to store it just to be able to perform a fast measurement of the classification error after each iteration . [ figure 2 : results for the iterative randomized kernel methods ( mnist data set ) : ( a ) nyström method ; ( b ) kaczmarz method ; ( c ) mp method ; ( d ) kaczmarz - mp method . here is the iteration time step and is the size of the random block submatrices . ] [ figure 3 : results for the randomized mp method with the fourier pre - processing ( mnist data set ) . here is the iteration time step and is the size of the random block submatrices . ] in figure 2(a ) we give the results obtained for the randomized nyström method . here we have represented the classification error as a function of the random block size and the number of aggregated classifiers . one can see that the method converges very fast , and only a few classifiers are needed to reach a plateau for . unfortunately , the error is dependent on , and this result suggests that the method performs better for larger values of . this is the fastest method considered here , since it requires the aggregation of only a few classifiers , such that for the total number of necessary classifiers is only . the randomized kaczmarz method shows a different behavior , figure 2(b ) .
in this case we ran the algorithm for iterations using random data blocks of size . this method shows a much slower convergence , and the whole computation process needed almost 5 hours to complete ( seconds per iteration time step ) , including the time for the error evaluation at each step . after the first iteration step the classification error drops abruptly below , then the classification error decreases slowly , and fluctuates closer to , which is in very good agreement with the previously reported values . better results for the kaczmarz method can be obtained by simply averaging the weights and biases of several separately ( in parallel ) trained classifiers , with different random seeds for the random number generator , and/or different block sizes . this form of aggregation is possible because the kernel matrix is the same for all classifiers , and the systems are linear in the dual space . assuming that we have classifiers , , each of them having the output : then the output of the averaged classifiers is : therefore , in the end one can store only the average values of the weights and the biases . the advantage of averaging is the decrease of the amplitude of the fluctuations in the classification error , which gives better and more confident results , and also increases the generalization capabilities . in figure 2(c ) we give the results for the randomized mp method . this method shows similar results to the randomized kaczmarz method , but its convergence is faster and the fluctuations have a lower amplitude . also , the method is about twice as fast , with seconds per iteration time step , for a random block of size . again , the obtained result is in very good agreement with the previously reported results . the results for the hybrid kaczmarz - mp method are given in figure 2(d ) , and not surprisingly it shows an in - between convergence speed , and a classification error of , which confirms the previous results .
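the equivalence used above — averaging the outputs of several classifiers equals evaluating a single classifier with averaged weights and biases , because the kernel row is shared and the model is linear in the dual variables — can be checked directly ( toy random values of ours ) :

```python
import random

def output(w, b, kvec):
    """dual-form classifier output: sum_i w_i * k(x_i, x) + b."""
    return sum(wi * ki for wi, ki in zip(w, kvec)) + b

rng = random.Random(4)
kvec = [rng.random() for _ in range(5)]                    # shared kernel row
ws = [[rng.random() for _ in range(5)] for _ in range(3)]  # 3 trained classifiers
bs = [rng.random() for _ in range(3)]

avg_of_outputs = sum(output(w, b, kvec) for w, b in zip(ws, bs)) / 3.0
w_bar = [sum(col) / 3.0 for col in zip(*ws)]               # store averages only
b_bar = sum(bs) / 3.0
out_of_avg = output(w_bar, b_bar, kvec)
```

this is why only the averaged weights and biases need to be stored in the end .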
in our last experiment we have decided to modify the data pre - processing , in order to see if better results can be obtained . we only made a simple modification , by concatenating the images with the square root of the absolute value of their fast fourier transform ( fft ) . since the fft is symmetrical , only the first half of the values were used , each image becoming a vector of 1176 elements . the new image pre - processing consists of the following steps : with this new pre - processing we used the randomized mp method and the results are shown in figure 3 . the classification error drops to , which means an improvement of over the previously reported results . the 85 images incorrectly recognized are shown in figure 4 . the cifar-10 data set consists of 60,000 images . each image has pixels and 3 colour ( rgb ) channels with 256 levels . the data set contains 10 classes of equal size ( 6,000 ) , corresponding to objects ( airplane , automobile , ship , truck ) and animals ( bird , cat , deer , dog , frog , horse ) . the training set contains 50,000 images , while the test set contains the remaining 10,000 images . in figure 5 we show the first 10 images from each class . obviously this data set is much more challenging than the mnist data set , showing a very high variation in each class . again , we only use the raw data without any augmentation and distortion , and without employing any other technique for advanced feature extraction and learning . in figure 6 we give the results of the numerical experiments for the randomized mp method , with the random data block size .
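the fourier augmentation can be illustrated on a 1 - d toy vector ; a naive o(n^2 ) dft is used here to stay dependency - free , whereas the paper applies an fft to the 28 x 28 images , giving 784 + 392 = 1176 features per image :

```python
import cmath, math

def dft_mag(x):
    """naive o(n^2) discrete fourier transform magnitudes (stdlib only)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n)]

def augment(img):
    """concatenate the (vectorized) image with the square root of the
    magnitude of its fourier transform; for real input the spectrum is
    symmetric, so only the first half is kept, then re-normalize."""
    feats = list(img) + [math.sqrt(m) for m in dft_mag(img)[: len(img) // 2]]
    s = math.sqrt(sum(v * v for v in feats))
    return [v / s for v in feats]

f = augment([1.0, 2.0, 3.0, 4.0])
```

the final re - normalization to a unit vector is our assumption , kept for consistency with the cosine - similarity pre - processing used elsewhere in the text .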
in the first experiment ( ) we used the simple two - step normalization described in ( 61)-(62 ) . thus , after the normalization steps each image becomes a unit vector with pixels , obtained by concatenating the columns from the three colour channels . we ran the randomized mp algorithm for steps , such that the classification error settled at . this is a good improvement over the random guessing `` method '' , which has an error of . in the second experiment ( ) , before the two - step normalization we have applied a gaussian filter on each channel . the filter is centred in the middle of the image and it is given by : \( g_{ij } = \exp\left ( - \frac{(i - l/2)^2 + ( j - l/2)^2}{2\sigma^2 } \right ) , \; i , j=1,\dots , l , \) where is the size of the image . the filter is applied element - wise to the pixels and it is used to enhance the center of the image , where supposedly the important information resides , and to attenuate the information at the periphery of the image . the best results have been obtained for a constant , and the classification error dropped to . in the third experiment ( ) we have used the gaussian filtering ( 76 ) and the fft normalization described by the equations ( 69)-(75 ) . this pre - processing method improved the results again and we have obtained an error of only . the resulting confusion matrix is given in table 1 . not surprisingly , one can see that the main `` culprits '' are the cats and dogs . for the cats are mistaken for dogs , while for the dogs are mistaken for cats . [ figure 6 : results for the randomized mp method ( cifar-10 data set ) . ]
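a sketch of a centered gaussian window applied element - wise to an l x l channel ; the exact constants ( in particular how the width parameter scales with the image size l ) are an assumption of ours , since the original formula is not fully recoverable here :

```python
import math

def gaussian_filter(l, sigma):
    """centered gaussian window for an l x l channel; emphasizes the
    middle of the image and attenuates the periphery.  the scaling of
    sigma is an illustrative assumption."""
    c = (l - 1) / 2.0
    return [[math.exp(-((i - c) ** 2 + (j - c) ** 2) / (2.0 * (sigma * l) ** 2))
             for j in range(l)] for i in range(l)]

def apply_filter(img, g):
    """element-wise multiplication of the channel by the window."""
    return [[p * w for p, w in zip(row, grow)] for row, grow in zip(img, g)]

g = gaussian_filter(5, 0.5)
img = [[1.0] * 5 for _ in range(5)]
filtered = apply_filter(img, g)
```

the window is 1 at the center pixel and symmetric , so opposite corners are attenuated equally .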
[ figure 6 caption , continued : here is the iteration time step and is the size of the random block submatrices . the following pre - processing steps have been used : - two - step data normalization ( 61)-(62 ) ; - gaussian filtering ( 76 ) and two - step data normalization ( 61)-(62 ) ; - gaussian filtering ( 76 ) and fft normalization ( 69)-(75 ) . ] in this paper we have discussed several approximation methods for the kernel ls - svm multi - class classification problem . these methods use randomized block kernel matrices , and their main advantages are the low memory requirements and the relatively simple iterative implementation , significantly reducing the complexity of the problem . another advantage is that these methods do not require complex parameter tuning mechanisms . the only parameters needed are the regularization parameter and the size of the random block matrices . despite their simplicity , these methods provide very good accuracy and reliable scaling to relatively large multi - class classification problems . also , we have reported new numerical results for the mnist and cifar-10 data sets , and we have shown that better results can be obtained by using a simple fourier pre - processing step of the raw data . the results reported here for the mnist raw data set are in very good agreement with the previously reported results using the kernel svm approach , while the results for the cifar-10 data are surprisingly good for the small amount of pre - processing used . t. hofmann , b. schölkopf , a. j. smola , _ kernel methods in machine learning , _ the annals of statistics 36(3 ) , 1171 ( 2008 ) . j. mercer , _ functions of positive and negative type and their connection with the theory of integral equations , _ philos . trans . r. soc . lond . ser . a 209 , 415 ( 1909 ) . c. cortes , v. vapnik , _ support vector networks _ , machine learning 20 , 273 ( 1995 ) . j.a.k . suykens , j. vandewalle , _ least squares support vector machine classifiers _ , neural processing letters 9(3 ) , 293 ( 1999 ) . e.
j. nyström , _ über die praktische auflösung von integralgleichungen mit anwendungen auf randwertaufgaben , _ acta mathematica 54(1 ) , 185 ( 1930 ) . s. kaczmarz , _ angenäherte auflösung von systemen linearer gleichungen , _ bulletin international de l'académie polonaise des sciences et des lettres . classe des sciences mathématiques et naturelles . série a , sciences mathématiques , 35 , 355 ( 1937 ) . s. g. mallat , z. zhang , _ matching pursuits with time - frequency dictionaries , _ ieee transactions on signal processing 41(12 ) , 3397 ( 1993 ) . c.k.i . williams , m. seeger , _ using the nyström method to speed up kernel machines , _ proceedings neural information processing systems 13 , 682 ( 2001 ) . j. sherman , w.j . morrison , _ adjustment of an inverse matrix corresponding to a change in one element of a given matrix , _ annals of mathematical statistics 21 , 124 ( 1950 ) . m. girolami , _ orthogonal series density estimation and the kernel eigenvalue problem , _ neural computation 14 , 669 ( 2002 ) . t. strohmer , r. vershynin , _ a randomized kaczmarz algorithm for linear systems with exponential convergence _ , journal of fourier analysis and applications 15 , 262 ( 2009 ) . p. vincent , y. bengio , _ kernel matching pursuit , _ machine learning 48 , 169 ( 2002 ) . y. lecun , l. bottou , y. bengio , p. haffner , _ gradient - based learning applied to document recognition , _ proceedings of the ieee 86(11 ) , 2278 ( 1998 ) . a. krizhevsky , g. hinton , _ learning multiple layers of features from tiny images , _ computer science department , university of toronto , tech . rep . , 2009 . j. bezanson , s. karpinski , v.b . shah , a. edelman , _ julia : a fast dynamic language for technical computing _ , arxiv:1209.5145 ( 2012 ) . c.j.c . burges , b. schölkopf , _ improving the accuracy and speed of support vector machines _ , advances in neural information processing systems 9 , 375 ( 1997 ) .
abstract : the least - squares support vector machine is a frequently used kernel method for non - linear regression and classification tasks . here we discuss several approximation algorithms for the least - squares support vector machine classifier . the proposed methods are based on randomized block kernel matrices , and we show that they provide good accuracy and reliable scaling for multi - class classification problems with relatively large data sets . also , we present several numerical experiments that illustrate the practical applicability of the proposed methods . keywords : kernel methods ; multiclass classification . pacs : 07.05.mh , 02.10.yn , 02.30.mv
macroscopic properties of granular materials such as soils depend on particle interactions . in dry granular materials , interparticle forces are related to the applied external loads as different studies have shown . in unsaturated soils subjected to capillary effects , new features must be accounted for in order to understand properly their behaviour . the presence of water leads to the formation of water menisci between neighboring grains , introducing new interparticle forces . the effects of these forces depend on the degree of saturation of the medium . for low water content levels corresponding to disconnected liquid bridges between grains , capillary theory allows the force induced by those bridges to be linked to the local geometry of the grains and to the matric suction or capillary pressure inside the medium . since the disconnected menisci assumption is not valid for high water content levels due to water percolation , we consider here only the unsaturated state where the discontinuity of the water phase can be assumed , the so - called pendular regime . + there has been a wide debate on the various interpretations of the mechanical behaviour of unsaturated soils . at the early stages of soil mechanics , terzaghi first introduced the concept of effective stress for the particular case of saturated soils , enabling the conversion of a multiphase porous medium into a mechanically equivalent single - phase continuum .
in unsaturated soils , water - induced stresses are still debated . the common practice is to use the suction or a modified version of it as a second stress variable within a complete hydro - mechanical framework . an alternative method is to develop homogenisation techniques in order to derive stress - strain relationships from forces and displacements at the particle level , as proposed in for dry granular materials . the basic idea is to consider the material as represented by a set of micro - systems , postulating that the behaviour of a material volume element depends on the intergranular interactions belonging to this volume . we propose here to extend this micro - mechanical approach to unsaturated granular materials as proposed by li , jiang et al . or lu and likos . + along these lines we present two micromechanical models which take into account capillary forces . the first one is a three - dimensional numerical model based on the discrete element method ( hereafter designated as the dem model ) pioneered by cundall and strack , and the second one is an analytical model ( hereafter designated as the microstructural model ) recently proposed by hicher and chang . the microstructural model is a stress - strain relation which considers interparticle forces and displacements . thanks to analytical homogenisation / localisation techniques , the macroscopic relation between stress and strain can be derived . in the dem model , a granular medium is modelled by a set of grains interacting according to elementary laws . direct simulations are carried out on grain assemblies , computing the response of the material along a given loading path .
by studying their effects under triaxial loading , we investigate the implications of capillary forces at the macroscopic level , and offer an insight into the unsaturated soil stress framework by introducing a capillary stress tensor as a result of homogenization techniques . macroscopic interpretations of the mechanical behaviour of unsaturated soils have been mainly developed in the framework of elastoplasticity . most of these models consider that the strain tensor is governed by the net stress tensor ( being the pore air pressure ) and the matric suction or capillary pressure ( being the pore water pressure ) inside the medium . in particular , they consider a new yield surface , called the loading collapse ( lc ) surface , in the plane ( ( ) , ( ) ) , which controls the volume changes due to the evolution of the degree of saturation for a given loading path . as a matter of fact , all these formulations can be considered as extensions of the relationship initially proposed by bishop and blight for unsaturated soils : where is called the effective stress parameter or bishop 's parameter , and is a function of the degree of saturation of the medium ( for a dry material , for a fully saturated material ) . + obviously , since the effective stress principle is by definition a macroscopic concept , several authors ( lu and likos or li ) have proposed to use a micromechanical approach to the effective stress principle . in order to further pursue this micromechanical approach to unsaturated soil stresses , we propose here a micromechanical analysis of the problem , examining the local water - induced effects through a set of simulated laboratory experiments . let us consider a representative volume element ( rve ) of a wet granular material , subjected to an assigned external loading . when the water content decreases inside a saturated granular sample , the air breaks through at a given state .
the capillary pressure ( ) corresponding to that point is called the air - entry pressure , and strongly depends on the pore sizes . thereafter , the sample becomes unsaturated and capillary forces start to grow due to interface actions between air and water . since the gaseous phase is discontinuous , this is the capillary regime . from this state , a constant decrease in the degree of saturation corresponds to a gentle increase in pore water pressure . the pendular regime starts when the water phase is no longer continuous . in this state , fluid equilibrium is obtained through the vapor pressure . analytical and experimental results demonstrate that capillary effects at particle contacts produce a kind of bond between particles as a result of menisci ( fig.[fig1 ] ) . liquid bridges may form between some pairs of adjoining particles not necessarily in contact , generating an attractive capillary force between the bonded particles . if the drying process continues , these water bridges begin to fail , starting from the non - contacting grains , until the complete disappearance of capillary forces inside the assembly . + as the pendular regime is considered throughout this paper , water is considered to be solely composed of capillary menisci : each liquid bridge is assumed to connect only two particles . therefore , two types of forces coexist within the granular medium . for dry contacts , a contact force develops between contacting granules . this repulsive force , which is a function of the relative motion between the contacting grains , is usually well described by an elastoplastic contact model .
for water - bonded particles , a specific attractive force exists . this water - induced attractive interaction can be described by a resulting capillary force , rather than by a stress distribution , as mentioned by haines or fisher . this capillary force is a function of the bridge volume , of the size of the particles , and of the fluid nature ( see section 3.1.1 for the details ) . the objective of this section is to derive , in a simple manner , an expression relating the overall stress tensor within the rve to this internal force distribution . + for this purpose , the love static homogenisation relation is used . this relation expresses the mean stress tensor within a granular volume as a function of the external forces applied to the particles belonging to the boundary of the volume : where are the coordinates of the particle with respect to a suitable frame . it is worth noting that this relation is valid whatever the nature of the interactions between grains . + taking into account the mechanical balance of each particle of the volume ( including the boundary ) , eq.(2 ) can be written as : where is the number of particles within the volume , is the interaction force exerted by the particle onto the particle , and is the branch vector pointing from particle to particle ( ) . + as we consider partially saturated granular media , two independent kinds of interparticle forces can be distinguished : 1 . if particles and are in contact , a contact force exists . 2 . if particles and are bonded by a liquid bridge , a capillary force exists . actually , depending on the local geometry , a liquid bond can exist between two grains in contact .
in that case , solid contacts are surrounded by the continuous liquid phase , providing the simultaneity of contact and capillary forces . the two contributions have therefore to be accounted for by summation . + finally , in all cases and for any couple of particles , it can be written that : thus , by combining eqs.(2 ) and ( 4 ) , it follows that : as a consequence , eq.(5 ) indicates that the stress tensor is split into two components : 1 . a first component accounting for the contact forces transmitted along the contact network . 2 . a second component representing the capillary forces existing within the assembly . it is to be noted that is a stress quantity standing for intergranular contact forces in the same way as in saturated or dry conditions . considering the concept as initially introduced by terzaghi , plays the role of the so - called effective stress by governing soil deformation and failure . besides , is the tensorial attribute of capillary water effects or suction , by extension . by analogy with eq.(1 ) , we can therefore define a microstructural effective stress , where could be associated with the net stress , representing the apparent stress state in the material . compared with eq.(1 ) , where the effect of water is intrinsically isotropic , implies a tensorial attribute of the water effects . + in fact , in both terms and , a fabric tensor can emerge from the summation . the fabric tensor is useful to characterize the contact anisotropy of the assembly , which is known as a basic feature of granular assemblies . in dry granular materials , an induced anisotropy can develop when a deviatoric stress loading is applied . in a partially saturated assembly , due to the possibility of interactions without contact , the conclusion is not so trivial .
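the split of eq.(5 ) can be checked numerically : because the love formula is linear in the forces , the stress of the summed forces equals the sum of the contact and capillary stress tensors . the toy pair below ( one branch vector , one repulsive contact force , one attractive capillary force ) is an illustration of ours :

```python
def stress(pairs, volume):
    """love homogenisation: sigma_ab = (1/V) * sum over pairs of
    f_a * l_b, where l is the branch vector between grain centers."""
    s = [[0.0] * 3 for _ in range(3)]
    for f, branch in pairs:
        for a in range(3):
            for b in range(3):
                s[a][b] += f[a] * branch[b] / volume
    return s

def add(t1, t2):
    return [[x + y for x, y in zip(r1, r2)] for r1, r2 in zip(t1, t2)]

# one interacting pair carrying both a repulsive contact force and an
# attractive capillary force along the same branch vector (toy values)
branch = [1.0, 0.0, 0.0]
f_contact = [2.0, 0.5, 0.0]
f_capillary = [-0.8, 0.0, 0.0]
V = 1.0

f_total = [a + b for a, b in zip(f_contact, f_capillary)]
total = stress([(f_total, branch)], V)
split = add(stress([(f_contact, branch)], V),
            stress([(f_capillary, branch)], V))
```

note that the resulting stress tensor is generally non - symmetric pair by pair , and the capillary contribution need not be isotropic , which is exactly the tensorial attribute discussed above .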
as an illustration , if we restrict our analysis to spherical particles , it can be inferred that : this relation points out that , contrary to the contact term , where the fabric tensor is directly linked to the induced anisotropy ( a contact is associated with a force ) , two causes can be invoked for the capillary term . first , the distribution of the liquid bonds can be anisotropic . secondly , the geometry of the bonds being obviously dependent on the local geometry , it is possible that the distribution of both terms and is also anisotropic . this is significant because the anisotropic attribute yields shear effects associated with the pore fluid , which could strongly influence the material behaviour . + in order to enrich our discussion , we present numerical investigations of these features in the following sections using both dem and micromechanical simulations . we present here a numerical analysis of the stress variables using a micromechanical model based upon the discrete element method initially introduced by cundall and strack . this technique starts with basic constitutive laws between interacting particles and can provide the macroscopic response of a particle assembly due to loading changes at the boundaries . each particle of the material is a rigid sphere identified by its own mass , , radius , and moment of inertia , .
for every time step of the computation , interaction forces between particles , and consequently the resulting forces acting on each of them , are deduced from sphere positions through the interaction law . newton 's second law is then integrated through an explicit second order finite difference scheme to compute the new sphere positions . a 3d software called yade ( yet another dynamic engine ) , kozicki and donzé , has been enhanced in order to properly simulate partially saturated granular material features . the contact interaction is described by an elastic - plastic relation between the force and the relative displacement of two interacting particles . a normal stiffness is therefore defined to relate the normal force to the intergranular normal relative displacement : and a tangential stiffness allows us to deduce the shear force induced by the incremental tangential relative displacement , this tangential behaviour obeying the coulomb friction law : where is the coulomb friction coefficient defined as , with the intergranular friction angle . + in the work presented here , and are dependent functions of the interacting particle sizes and of a characteristic modulus of the material denoted as : this definition results in a constant ratio between and the effective bulk modulus of the packing , whatever the size of the particles . + for simplicity , we assume that the water inside the sample is solely composed of capillary water as defined in the pendular state , with a discontinuous liquid phase .
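the elastic - plastic contact law ( linear normal stiffness , incremental shear bounded by the coulomb cone ) can be sketched as follows ; the scalar 1 - d shear treatment and all numerical values are simplifications of ours , not the yade implementation :

```python
def contact_forces(un, dus, fs_prev, kn, ks, mu):
    """elastic-plastic contact law: fn = kn * un when the grains overlap
    (un > 0); incremental shear fs += ks * dus, capped by the coulomb
    criterion |fs| <= mu * fn (mu = tan of the friction angle)."""
    fn = kn * un if un > 0.0 else 0.0      # purely repulsive normal force
    fs = fs_prev + ks * dus                # elastic tangential increment
    limit = mu * fn
    if abs(fs) > limit:                    # sliding: project back onto the cone
        fs = limit if fs > 0.0 else -limit
    return fn, fs

fn0, fs0 = contact_forces(0.01, 0.0, 0.0, 1e5, 5e4, 0.5)     # sticking
fn1, fs1 = contact_forces(0.01, 0.01, 400.0, 1e5, 5e4, 0.5)  # sliding, capped
```

the shear force is history dependent ( it accumulates increments ) , while the normal force depends only on the current overlap .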
Much attention has been given to these pendular liquid bridges. Their exact shape between spherical bodies is defined by the Laplace equation, relating the pressure difference across the liquid-gas interface to the mean curvature of the bridge and the surface tension of the liquid phase. In the Cartesian coordinates of fig. [fig1](b), the two curvature radii (fig. [fig1](a)) are expressed in terms of the profile of the liquid-gas interface curve; the symmetry axis of the bridge passes through the centres of the connected spheres (fig. [fig1](b)). According to the Laplace equation, the profile of the liquid bridge is thus related to the capillary pressure through a differential equation. The corresponding liquid bridge volume and intergranular distance can be obtained by considering the coordinates of the three-phase contact lines defining the solid-liquid-gas interface, as defined by Soulié et al. The capillary force due to the liquid bridge can be calculated at the profile apex according to the 'gorge method', and consists of a contribution of the capillary pressure as well as the surface tension. The relation between the capillary pressure and the configuration of the capillary doublet is thus described by a system of non-linear coupled equations (15, 16, 17, 18), where the local geometry and water volume arise as a result of the solved system. Therefore, to account for capillarity in the code, an interpolation scheme on a set of discrete solutions of the Laplace equation has been developed, in order to link directly the capillary pressure to the capillary force and water volume of the liquid bridge for a given grain-pair configuration. This results in a suction-controlled model where, at every time step during the simulation, capillary forces and water volumes are computed based upon the microstructure geometry and the imposed suction level.
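The 'gorge method' evaluation and the table-interpolation shortcut can be sketched as follows; the table values are made up for illustration, and the real scheme interpolates in suction, geometry and volume simultaneously rather than over a single curve:

```python
import math
import numpy as np

def capillary_force_gorge(y0, gamma, delta_p):
    # 'gorge method': evaluate the force at the neck (profile apex) of
    # radius y0 as a surface-tension term acting along the neck perimeter
    # plus the capillary pressure acting over the neck cross-section
    return 2.0 * math.pi * y0 * gamma + math.pi * y0 ** 2 * delta_p

# interpolation shortcut: rather than solving the laplace equation at
# every time step, precompute discrete solutions and interpolate; the
# table below is hypothetical (force vs. distance at one suction level)
d_grid = np.linspace(0.0, 1.0e-4, 50)        # intergranular distances (m)
f_grid = 1.0e-6 * np.exp(-d_grid / 2.0e-5)   # made-up precomputed forces (N)

def capillary_force_lookup(d):
    # linear interpolation in the precomputed table (clamped outside)
    return np.interp(d, d_grid, f_grid)
```

Precomputing the Laplace solutions once and interpolating at run time is what keeps the suction-controlled model affordable inside the explicit DEM loop.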
A schematic diagram of the implemented capillary law is shown in fig. [fig2] for a given value of the suction (caption of fig. [fig2]: evolution of the capillary force with the intergranular distance for a given suction value; a meniscus can form at contact and breaks off beyond the rupture distance). In this paper, the choice was made to define the appearance of a bridge when particles come strictly into contact, neglecting the possibility of adsorbed water effects. The capillary force is considered constant over the range of the elastic deformation, assuming local displacements to be very small compared to particle radii. Let us note that the formulation intrinsically defines the distance at which the meniscus breaks off as depending on the given capillary pressure and on the local geometry. This maximum distance corresponds to the configuration beyond which the Laplace equation has no solution. Since this study covers the macroscopic and microscopic aspects of unsaturated granular media, stress tensors are calculated by both macro and micro methods. The macro method is the conventional way used in the laboratory to measure stresses in experiments, namely from the normal forces acting on the boundaries and the boundary surfaces oriented by their normal directions. This stress is equivalent to the net stress used in unsaturated soil mechanics, with the pore air pressure used as the reference pressure, because the pore air pressure is effectively zero in many practical problems (as well as in this study).
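The piecewise law of fig. [fig2] amounts to simple hysteretic bookkeeping per grain pair; in this sketch `force_of_d` is a hypothetical callable standing in for the interpolated Laplace solution:

```python
def capillary_state(d, bridge_exists, d_rupture, force_of_d):
    # d             : intergranular distance (<= 0 means contact/overlap)
    # bridge_exists : was a meniscus present at the previous time step?
    # d_rupture     : break-off distance for the imposed suction/geometry
    # force_of_d    : callable giving the attractive capillary force at d
    if d <= 0.0:
        # a meniscus forms strictly at contact; the force is held
        # constant over the (small) elastic overlap
        return True, force_of_d(0.0)
    if bridge_exists and d < d_rupture:
        return True, force_of_d(d)   # bridge persists until rupture
    return False, 0.0                # no meniscus, no capillary force
```

The asymmetry between formation (only at contact) and persistence (up to the rupture distance) is what makes the law hysteretic.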
As seen in section 2.2, two other stress tensors can be considered through homogenisation techniques: the intergranular stress tensor, computed from intergranular forces, and the capillary stress tensor, computed from capillary forces.

The studied particle assembly is a cubic sample of 1 mm side composed of 10000 spheres, with a grain size distribution ranging from 0.025 mm to 0.08 mm, as shown in fig. [fig3], and a porosity of 0.385. The input parameters are listed in table 1 (DEM model parameters), referring to equation (10).

Fig. [fig11] presents the numerical simulations of three triaxial tests performed at three different confining pressures for the simulated dry assembly. One can see that the results obtained with the microstructural model compare well to the DEM ones. DEM simulations were performed on unsaturated assemblies at a given initial degree of saturation, corresponding to an imposed suction value. In order to determine the corresponding capillary forces, eq. (25) includes two material parameters which control the evolution of the water-induced forces with the distance between two particles. According to experimental results presented in different studies, we selected values that give a standard evolution of the capillary forces as a function of the distance between particles, as well as an initial isotropic distribution of these forces if the material structure is isotropic. The evolution of the capillary forces with the degree of saturation requires two more parameters; according to previous studies, we selected the value of the first and determined the value of the second
in order to obtain an initial value of the capillary stress in accordance with the one computed by DEM. We then performed numerical simulations of triaxial tests on wet samples for different confining pressures. Contrary to the DEM suction-controlled simulations, the microstructural model tests are water-content controlled. In order to compare the two approaches, samples were initially wetted to a common degree of saturation. One should notice that, even if the test conditions are not strictly identical, the changes in the degree of saturation during loading obtained for both tests are sufficiently similar (fig. [fig12]) for us to compare the results obtained by the two approaches.

As presented in fig. [fig13], the two models give quite similar results. One can see that a material strength increase is obtained for unsaturated samples compared to dry ones at the same confining pressure. The volume changes during triaxial loading create a small change in the degree of saturation (fig. [fig12]). As a consequence, the capillary forces evolve, according to eq. (5). Fig. [fig14] shows the evolution of the principal components of the capillary stress tensor during constant-water-content triaxial tests. The initial state corresponds to an isotropic capillary stress tensor with a mean stress equal to the value obtained in DEM.
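The two homogenised stress tensors (intergranular and capillary, both of Love-Weber type) and the fabric tensor used below to quantify the induced anisotropy can be sketched as follows; `bonds` is a hypothetical list of (force, branch vector) pairs, not the paper's actual data structure:

```python
import numpy as np

def homogenised_stress(bonds, volume):
    # love-weber type average over a particle assembly of volume V:
    # sigma = (1/V) * sum_b f_b (outer) l_b, with l_b the branch vector
    # joining the two sphere centres; applied to contact forces it gives
    # the intergranular stress tensor, applied to capillary forces
    # (summed over all menisci, touching or not) the capillary one
    sigma = np.zeros((3, 3))
    for force, branch in bonds:
        sigma += np.outer(force, branch)
    return sigma / volume

def fabric_tensor(normals):
    # second-order fabric tensor of the bond directions,
    # F = (1/N) * sum_b n_b (outer) n_b ; trace(F) = 1, and the
    # deviation from identity/3 measures the induced anisotropy
    n = np.asarray(normals, dtype=float)
    n = n / np.linalg.norm(n, axis=1, keepdims=True)
    return sum(np.outer(v, v) for v in n) / len(n)
```

For an isotropic bond network the fabric tensor reduces to one third of the identity, and any loading-induced deviation from it shows up directly in the principal components of the capillary stress tensor.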
During loading, a structural anisotropy is created due to the evolution of the fabric tensor defined in eq. (38). Therefore, the principal components of the capillary stress tensor evolve differently. In the studied cases, this difference remains small, and corresponds at the end of the test to a small relative difference. This can be explained by two causes. The first one is the small amount of induced anisotropy obtained through the evolution of the fabric tensor. This evolution is due to the change in the branch length for each direction which, in this version of the model, is only due to elastic deformations of the grains in contact. Since all our numerical tests were performed at small confining stresses, the amount of elastic deformation remained quite small. The second reason is linked to the fact that capillary bridges can exist between non-touching neighbouring grains. This has been taken into account both in calculating the mean capillary force and in determining the capillary stress tensor (eqs. (27) and (41)). This result is in agreement with the contact and menisci distributions computed by DEM (figs. [fig8] and [fig9]).

Regarding the constitutive behaviour at contacts between solid particles, the results provided by the two methods are in rather good agreement concerning the influence of capillary forces at the macroscopic level. The increase in shear strength classically observed for partially saturated materials is clearly recovered starting from microscopic considerations, and a slight induced anisotropy of the capillary stress tensor is observed, confirming that suction effects in unsaturated materials cannot be precisely accounted for by an equivalent pore pressure assumption. Starting from local capillary forces, a stress variable, denoted as the capillary stress tensor and intrinsically associated with water effects, has been defined through homogenisation techniques. Triaxial
compression test simulations from two fundamentally different micromechanical models were performed on a granular assembly under several confining pressures, for dry and partially saturated conditions. Both models reproduce in quite good agreement the main features of unsaturated granular materials, in particular the increase of the shear strength due to capillary effects.

The results also suggest that, in partially saturated materials within the pendular regime, the effects of the pore fluid are more adequately represented by a discrete distribution of forces than by an averaged pressure in the fluid phase. Effectively, as a representative quantity of the pore fluid distribution inside unsaturated materials, this suction-associated stress tensor reveals that the pore fluid has its own fabric, which is inherently anisotropic and strongly dependent on the combined loading and hydric history of the material. Even if the induced anisotropy of the capillary stress tensor appears slight in this study, it is clear that this tensorial nature of water in unsaturated materials implies that suction produces shear effects on the solid phase. This suction-induced shear effect consequently makes it difficult to associate an isotropic quantity with water, as expressed in Bishop's effective stress. Pore pressure is no longer an isotropic stress in unsaturated soil, and therefore the pore fluid cannot be treated as an equivalent continuum medium. The analysis finally confirms that suction is a pore-scale concept, and that stress definitions for unsaturated soils should also include microscopic interparticle stresses such as the ones resulting from capillary forces.

The multi-scale approach presented here appears to be a pertinent complementary tool for the study of unsaturated soil mechanics. More precisely, discrete methods should convey new insights into the discussion about the controversial concept of generalised effective stress, by relating basic physical aspects to classical
phenomenological views.

References:
Nuth M, Laloui L. Effective stress concept in unsaturated soils: clarification and validation of a unified framework. International Journal for Numerical and Analytical Methods in Geomechanics 2008; 32:771-801.
Rothenburg L, Selvadurai APS. Micromechanical definition of the Cauchy stress tensor for particulate media. Mechanics of Structured Media 1981; Selvadurai APS, editor. Amsterdam, The Netherlands: Elsevier; 469-486.
Soulié F, Cherblanc F, El Youssoufi MS, Saix C. Influence of liquid bridges on the mechanical behaviour of polydisperse granular materials. International Journal for Numerical and Analytical Methods in Geomechanics 2006; 30:213-228.
De Buhan P, Dormieux L. On the validity of the effective stress concept for assessing the strength of saturated porous materials: a homogenization approach. Journal of the Mechanics and Physics of Solids 1996; 44(10):1649-1667.
Dangla P, Coussy O, Eymard R. Non-linear poroelasticity for unsaturated porous materials: an energy approach. Poromechanics, a Tribute to M.A. Biot. Proceedings of the Biot Conference on Poromechanics, Balkema, 1998; 59-64.
Fleureau J-M, Hadiwardoyo S, Gomes Correia A. Generalised effective stress analysis of strength and small strains behaviour of a silty sand, from dry to saturated state. Soils and Foundations 2003; 43(4):21-33.

Abstract: this paper presents a micromechanical study of unsaturated granular media in the pendular regime, based upon numerical experiments using the discrete element method, compared to a microstructural elastoplastic model. Water effects are taken into account by adding capillary menisci at contacts, and their consequences in terms of force and water volume are studied. Simulations of triaxial compression tests are used to investigate both macro- and micro-effects of a partial saturation.
The results provided by the two methods appear to be in good agreement, reproducing the major trends of a partially saturated granular assembly, such as the increase in the shear strength and the hardening with suction. Moreover, a capillary stress tensor is exhibited from capillary forces by using homogenisation techniques. Both macroscopic and microscopic considerations emphasise an induced anisotropy of the capillary stress tensor in relation with the pore fluid distribution inside the material. In so far as the tensorial nature of this fluid fabric implies shear effects on the solid phase associated with suction, a comparison has been made with the standard equivalent pore pressure assumption. It is shown that water effects induce microstructural phenomena that cannot be considered at the macro level, particularly when dealing with material history. Thus, the study points out that unsaturated soil stress definitions should include, besides the macroscopic stresses such as the total stress, the microscopic interparticle stresses such as the ones resulting from capillary forces, in order to interpret more precisely the implications of the pore fluid on the mechanical behaviour of granular materials. [published, doi: 10.1002/nag.767]

Keywords: micromechanics; granular materials; unsaturated; DEM; capillary forces; microstructure
Urban crowds are characterised by the presence of a large number of social groups. The ratio between individual pedestrians and pedestrians moving in groups may change considerably between different environments and at different times of the day, but it is in general never negligible, with groups representing up to 85% of the walking population. Despite this empirical evidence about the importance of groups, the standard approach in microscopic (agent-based) pedestrian modelling has for a long time been to assume that the crowd is composed of individuals moving without any preferential ties to other pedestrians. This is an extremely strong simplification of the system, although it was obviously understandable as a first approach to the problem. Nevertheless, it is intuitive that groups behave in a specific way (they move together and close to each other) and that their presence should clearly influence the dynamics of the crowd. Not taking into account the group component of crowds may have an impact on the planning of buildings and emergency evacuation plans. For example, it has been reported that around 48% of the people who evacuated the city of Sendai (Miyagi, Japan) on foot during the 2011 Tohoku great earthquake and tsunami did so by moving in groups, with a probable effect on evacuation times (for the smaller city of Kamaishi, in Iwate prefecture, the figure was as high as 71%). Furthermore, an understanding of pedestrian behaviour is essential for robots and automatic navigation vehicles (such as wheelchairs or delivery carts) that will arguably become common in future pedestrian areas. Indeed, in recent years, a few studies concerning empirical observations and mathematical modelling of the groups' characteristic configuration and velocity have been introduced.
In a recent series of papers, we focused on the development of a mathematical model to describe group interaction, and specifically the group spatial structure and velocity. The model we proposed introduced a non-Newtonian potential for group interaction on the basis of a few intuitive ideas about social interaction in pedestrian groups, and its predictions for group size, structure and velocity are in agreement with the observed natural behaviour of pedestrians. We then studied a large data set of pedestrian trajectories to see how an extrinsic, i.e. environmental, property such as crowd density influences the dynamics of groups, and introduced a mathematical model to explain such a crowd density effect on groups. Nevertheless, we may expect that a social behaviour such as walking in groups depends also on intrinsic properties of the groups. It is known from studies with subjects that age, gender and height affect walking speed, but here we are interested in how group behaviour is affected by the nature of the group itself: not only by the characteristics of the individuals that compose it, but also by the relation between them, which is expected to have a strong impact on group dynamics (see also our preliminary study). To study natural human behaviour, we use a large ecological (i.e. obtained by observing unconstrained pedestrians in their natural environment) data set to describe how the group spatial structure, size and velocity of dyads (two-people groups) change based on the following intrinsic properties of groups: 1. the purpose of movement, 2. the relation between the members, 3. the gender of the members, 4. the age of the members, 5. the height of the members. Being based on unconstrained trajectories of unknown pedestrians, such features are necessarily (with the exclusion of pedestrian height, obtained automatically through our tracking system) apparent, i.e.
based on the judgement of human coders, and thus an analysis of their reliability is performed. Furthermore, social behaviour being culture dependent, the results are probably influenced by the place in which the data were collected (a shopping mall in Osaka, western Japan). Nevertheless, they provide a useful insight into how these intrinsic features affect in a quantitative way the behaviour of dyads. The pedestrian group data base used for this work is based on a freely available set, which is in turn based on a very large pedestrian trajectory set collected in an area of the Asia and Pacific Trade Center (ATC), a multi-purpose building located in the Osaka (Japan) port area. For the purpose of this work, in order to avoid taking into consideration the effect of architectural features of the environment, such as its width, we use data only from the corridor area.

The trajectories have been automatically tracked using 3D range sensors and a tracking algorithm which provides, along with the pedestrian position on the plane, the height of their head, for more than 800 hours over a one-year time span. At the same time, we video recorded the tracking area using 16 different cameras. A subset of the video recordings was used by a human coder to identify the pedestrian social groups reported in the data set. The data set concerns the natural behaviour of pedestrians, i.e. the pedestrians were behaving in an unconstrained way and were observed in their natural environment. Collecting data in the pedestrians' natural environment obviously presents some technical problems and an overall lower quality in tracking data (higher tracking noise), but it is an approach with growing popularity that allows removing the possible influence on pedestrian behaviour due to performing experiments in laboratories, i.e.
artificial environments, using selected subjects following the experimenters' instructions. This is extremely important for this study, since we may hardly believe that social pedestrian group behaviour could be observed in such controlled laboratory experiments. The pedestrians in this data set are all socially interacting, i.e. they were, on the basis of conversation and gaze cues, coded as not only moving together, but also performing some kind of social interaction. In order to obtain the "ground truth" for the group composition and social relation, we proceeded similarly to our previous works, and asked three different human coders to observe the video recordings corresponding to the data set and analyse the group composition, and in detail to code, when possible, 1. the apparent purpose of the group's visit to the area (work or leisure), 2. the apparent gender of the members, 3. their apparent relation (colleagues, friends, family or couples), 4. their apparent age (in decades, such as 0-9, 10-19, etc.). While one coder examined data from five different days (three working days and two holidays), corresponding to 1168 different socially interacting dyads, the other two examined only one day (283 dyads). The coders are not specialised in pedestrian studies, are not aware of our mathematical models of pedestrian behaviour, and did not have access to our quantitative measurements of position and velocity. They thus relied only on visual features such as clothing, gestures, behaviour and gazing to identify the groups' social roles and composition. The coding process is obviously strongly dependent on the subjective evaluation of the coder. Nevertheless, the 283 dyads examined by all coders may be used to examine their agreement, and thus provide some information about the reliability of their coding. To this end, we use in appendix [coderel] two different approaches.
On one hand, we use the standard approach of the social sciences of directly comparing the results of the coding, through statistical indicators such as Cohen's and Fleiss's kappa, or Krippendorff's alpha (appendix [coderag]). On the other hand, we also use an approach more rooted in the "hard" sciences, treating the different codings as independent experiments and qualitatively and quantitatively comparing the findings (appendix [codercomp]). While our tracking system provides us with pedestrian positions and velocities at time intervals in the order of tens of milliseconds, we average pedestrian positions over longer time intervals, to reduce the effect of measurement noise and the influence of the pedestrian gait. As a result, we obtain pedestrian positions at discrete times (along with the height of the top of the pedestrian head), and define pedestrian velocities in 2D by finite differences of consecutive positions. Following our previous work, only data points with both the average group velocity (eq. [ev]) and all individual velocities larger than 0.5 m/s, and with all pedestrian positions falling inside a square with side 2.5 m centred on the group centre, were used. Pedestrian height measurement is obviously subject to oscillations. A major problem with height tracking is that there are situations in which the head is hidden or poorly tracked, and the pedestrian height is wrongly assigned as the height of the shoulders. To avoid this problem, for each pedestrian we first compute the median height over the whole trajectory, and then define the pedestrian height as the average of all measurements above the median. We use only data points for which the distance between the pedestrians in the group was less than 4 meters; this is a conservative threshold, justified by our previous findings suggesting that interacting groups have an extremely low probability of being at a distance larger than 2 meters.
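These preprocessing steps can be sketched as follows; the 0.5 s averaging window and the array names are illustrative assumptions (the paper's actual window length is not reproduced in this text):

```python
import numpy as np

DT = 0.5  # averaging window in seconds (assumed value, for illustration)

def resample(ts, xs, dt=DT):
    # average raw positions over windows of length dt, reducing tracking
    # noise and the oscillation due to pedestrian gait
    ts, xs = np.asarray(ts, float), np.asarray(xs, float)
    bins = ((ts - ts[0]) // dt).astype(int)
    t_out = np.array([ts[bins == b].mean() for b in np.unique(bins)])
    x_out = np.array([xs[bins == b].mean(axis=0) for b in np.unique(bins)])
    return t_out, x_out

def velocities(t, x):
    # finite-difference velocity between consecutive resampled positions
    return (x[1:] - x[:-1]) / (t[1:] - t[:-1])[:, None]

def pedestrian_height(h_samples):
    # robust height estimate: average of all measurements above the
    # median, so frames in which the shoulders were tracked instead of
    # the head do not bias the estimate downwards
    h = np.asarray(h_samples, float)
    return h[h > np.median(h)].mean()
```

Filtering on speed, mutual distance and position would then simply discard the resampled data points that violate the thresholds given above.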
By being smaller and thus more easily occluded, children are more difficult to track than adults. Since our statistical analysis identified a few interesting results related to children, we visually analysed the video recordings to verify that this problem did not affect our findings significantly. As we reported previously, group velocity and spatial configuration depend on crowd density. In this paper we again follow our main analysis and compute the pedestrian density, with a good spatial resolution (more than a good time resolution), as time averages over 300 seconds in a square area. More details, along with a discussion of possible density definitions, may be found in our previous work (refer also to the literature for possible alternative definitions of density). Based on the analysis performed in our previous works, we define the following quantitative observables for the dynamics of a pedestrian dyad (fig. [f0]): 1. the group velocity, defined as the average of the velocities of the two pedestrians in an arbitrary reference frame co-moving with the environment (i.e. in which the velocity of walls and other architectural features is zero); 2.
the pedestrian distance or group size, defined from the positions of the two pedestrians in the above reference frame; 3. the group abreast distance or abreast extension, which may be defined as follows: first we identify a unit vector in the direction of the group velocity; for each pedestrian we compute the clockwise angle between this unit vector and the pedestrian's relative position, and define the projection of each relative position orthogonal to the velocity; if necessary, we rename the pedestrians so that the projections are ordered, and finally define the abreast distance from them; 4. we can also define the group extension in the direction of motion; this is a signed quantity, a property that was particularly useful in our previous works; nevertheless, for the purpose of this paper, it proved more useful to analyse the group depth. As described in detail in appendix [detstat], to which the reader should refer for technical details, for each observable and relation or composition category we provide the number of groups, the category average, standard deviation and standard error, all based on the analysis of groups that contributed at least 10 usable data points, and reported in tables. Furthermore, we provide an analysis of the overall observable probability distribution function, and some parameters to estimate the differences between categories (ANOVA values, effect size, coefficient of determination). The cross-analysis regarding the common effect of the different "factors" (i.e. purpose, relation, gender, age and height) may be found in appendix [accounting]. The results related to the purpose dependence of all observables concerning the 1088 dyads whose purpose was coded (and that provided enough data to be analysed) are shown in table [table1a] (refer to appendix [detstat] for an explanation of all terms).
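Given the positions and velocities of the two pedestrians at one time step, the four observables can be sketched as follows (magnitudes only; the signed bookkeeping described above is omitted, and the argument names are illustrative):

```python
import numpy as np

def dyad_observables(x1, x2, v1, v2):
    # x1, x2 : 2d positions of the two pedestrians at one time step
    # v1, v2 : their 2d velocities in a frame co-moving with the walls
    vg = (np.asarray(v1, float) + np.asarray(v2, float)) / 2.0  # group velocity
    delta = np.asarray(x1, float) - np.asarray(x2, float)
    dist = np.linalg.norm(delta)             # pedestrian distance (group size)
    g = vg / np.linalg.norm(vg)              # unit vector along the motion
    depth = abs(np.dot(delta, g))            # extension along the motion
    abreast = np.linalg.norm(delta - np.dot(delta, g) * g)  # orthogonal part
    return np.linalg.norm(vg), dist, abreast, depth
```

For example, two pedestrians walking side by side one metre apart give zero depth and an abreast distance of one metre.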
(Table [table1a]; columns give, in order, the group velocity [mm/s], the pedestrian distance, the abreast distance and the group depth [mm], as mean ± standard error, with the standard deviation in parentheses.)

leisure (716 dyads): 1118 ± 7.3 (195); 815 ± 9.5 (253); 628 ± 6.1 (162); 383 ± 12 (334)
work (372 dyads): 1271 ± 8.2 (158); 845 ± 12 (228); 713 ± 8 (154); 332 ± 15 (289)
ANOVA F: 169; 3.75; 69.4; 6.25
p: < 0.001; 0.053; < 0.001; 0.0126
R^2: 0.135; 0.00344; 0.0601; 0.00572
effect size: 0.832; 0.124; 0.533; 0.16

We have thus very strong and significant evidence that pedestrians who visited the environment for working walk at a higher velocity and with a larger abreast distance, as shown by the comparison of averages and standard errors, and by the corresponding high F and effect size and low p values (see appendix [detstat] for definitions and meaning of these quantities). We also have a difference in distance and group depth, although its significance is less strong. We can get further insight about the differences in behaviour between workers and leisure-oriented people by studying explicitly the probability distribution functions of the four observables, which are shown respectively in figures [f1d], [f1a], [f1b] and [f1c] for leisure (blue, bin centres identified by circles) and work (red, squares) oriented dyads, and whose statistical analysis is reported in appendix [overpur] (refer again to appendix [detstat] for the difference between the analysis reported in the main text and the one of the appendix); all PDFs in this work are shown after having been smoothed with a moving average filter. These PDFs provide an easy interpretation of the data. The velocity and abreast distance peaks and tails are displaced to higher values for workers. The distance peak is also displaced to a higher value, but the leisure distribution has a fatter tail.
Correspondingly, the distance distribution is slightly more spread for leisure-oriented pedestrians. Furthermore, the abreast distance distribution presents considerably higher values at low abreast distances in the leisure case. These latter considerations show that while "workers" walk strongly abreast, the "leisure" dyads are less ordered. In appendix [furtherpur] we further analyse these results, to understand the effect on them of age, gender, density and height, while in appendix [coderpur] we verify that the major findings are confirmed by all coders. We may see that in general, even when age, gender, density and height are kept fixed, the results exposed above are confirmed by this further analysis. Groups may also be analysed according to the relation between their members (colleagues, couples, friends, family). There is obviously a strong overlap between the "colleagues" category and the "work" one analysed above (and similarly between "leisure" and the three categories couples, friends and families), but since they are conceptually different (colleagues could visit the shopping mall for lunch, or for shopping outside of working time), we will provide an independent analysis (in the following we usually drop the analysis of purpose and focus on relation). The results related to the relation dependence of all observables concerning the 1018 dyads whose relation was coded (and that provided enough data to be analysed) are shown in table [table2a].
(Table [table2a]; columns give, in order, the group velocity [mm/s], the pedestrian distance, the abreast distance and the group depth [mm], as mean ± standard error, with the standard deviation in parentheses.)

colleagues (358 dyads): 1274 ± 8.3 (157); 851 ± 12 (231); 718 ± 8.3 (157); 334 ± 15 (292)
couples (96 dyads): 1099 ± 17 (169); 714 ± 22 (219); 600 ± 15 (150); 291 ± 24 (231)
families (246 dyads): 1094 ± 13 (197); 863 ± 19 (302); 583 ± 11 (171); 498 ± 25 (391)
friends (318 dyads): 1138 ± 11 (200); 792 ± 11 (199); 662 ± 7.5 (134); 314 ± 15 (268)
ANOVA F: 60.7; 12.2; 42.3; 21.4
p: < 0.001 for all four observables
R^2: 0.152; 0.0349; 0.111; 0.0595
effect size: 1.03; 0.529; 0.828; 0.587

We may see that, as expected from the previous analysis, there is a considerable difference between the velocity of colleagues and that of the other groups. Friends appear to be faster than couples or families, although the difference is limited to 2-3 standard errors. We may also see that couples walk at the closest distance, followed by friends, colleagues and then families. On the other hand, families walk at the shortest abreast distance, although at a value basically equivalent to that of couples. The abreast distance of friends is significantly larger, and that of colleagues assumes the greatest value. The depth assumes the smallest value in couples, followed by friends and colleagues, and the, by a large margin, highest value in families. These results may be completely understood only by analysing the probability distribution functions, which are shown in figures [f2d], [f2a], [f2b] and [f2c] for, respectively, the velocity, distance, abreast distance and depth observables, for dyads with colleague (blue, squares), couple (red, circles), family (orange, triangles) and friend (green, stars) relation (the statistical analysis of these distributions is reported in section [overrel]).
The PDFs provide again an easy interpretation of the data. The velocity distributions for friends, families and couples are quite similar, while the one for colleagues is clearly different (displaced to higher values). This suggests that "relation" influences velocity in a limited way compared to "purpose". The peaks of both the distance and abreast distance distributions assume the minimum value for couples, followed by families, friends and colleagues. The distributions for families present the following peculiar properties: the distance distribution has a fat tail (causing the high average value), the abreast distance distribution assumes large values at small abreast distances, and the depth distribution is more spread (on the other hand, the depth distributions are very similar in the other categories). We may thus conclude that "relation" has an influence on distance, with couples walking at the closest distance, followed by families, friends and colleagues. At the same time, families walk in a less ordered formation (less abreast). This behaviour is probably mainly due to children (see also section [ageef]), and influences the results of the previous section ("leisure"-oriented dyads walking less abreast). In appendix [furtherrel] we further analyse these results, to understand the effect on them of age, gender, density and height, while in appendix [coderrel] we verify whether the major findings are confirmed by all coders. The major trends exposed above are all confirmed by this further analysis.
In particular, the tendency of families to have a wider depth distribution may be diminished but does not disappear when we keep gender, age or height fixed, showing that it is probably not only due to children. The results related to the gender dependence of all observables, concerning the 1089 dyads whose gender was coded (and that provided enough data to be analysed), are shown in table [table3a].

gender      | N   | velocity [mm/s]     | distance [mm]     | abreast distance [mm] | depth [mm]
two females | 252 | 1102 ± 12 (σ=193)   | 790 ± 14 (σ=227)  | 647 ± 7.8 (σ=123)     | 321 ± 20 (σ=311)
mixed       | 371 | 1111 ± 9.5 (σ=183)  | 824 ± 14 (σ=273)  | 613 ± 9 (σ=174)       | 416 ± 18 (σ=350)
two males   | 466 | 1254 ± 8.3 (σ=178)  | 846 ± 11 (σ=228)  | 699 ± 7.7 (σ=166)     | 349 ± 14 (σ=293)
F           |     | 84.6                | 4.37              | 30.7                  | 7.69
p           |     |                     | 0.0129            |                       | 0.000484
det. coeff. |     | 0.135               | 0.00798           | 0.0535                | 0.014
effect size |     | 0.825               | 0.248             | 0.51                  | 0.282

Although we may easily see that the differences between the distributions are statistically significant (with stronger differences in the velocity and abreast distance distributions), it is again useful, in order to understand these results, to analyse the probability distribution functions, which are shown in figures [f3d], [f3a], [f3b] and [f3c] for the four observables (the statistical analysis of these distributions is reported in section [overgen]).

[Figures f3d, f3a, f3b and f3c: probability distribution function of each observable in dyads with two females (red, circles), mixed (orange, triangles) and two males (blue, squares).]

The difference between the female and male distributions is very clear, with both peaks and tails in the velocity and (abreast or absolute) distance distributions displaced to higher values.
Regarding the depth distribution, we may see that the male distribution is more spread than the female one, and thus females have a stronger tendency to walk abreast. The mixed dyads' absolute and abreast distance distributions are characterised by low values for the peaks and fat tails, in particular for the absolute value distribution. The abreast distance distribution presents relatively high values at low abreast distances, and correspondingly the depth distribution is very spread (tendency not to walk abreast). The mixed dyads' velocity distribution is interestingly very similar to the female one. The peculiarity of the mixed distributions may be better understood by taking into consideration the other effects, in particular those related to relation, as shown in appendix [furthergen]. Coder reliability is analysed in appendix [codergen].

To study the dependence of the four observables on age, we used three different approaches, namely studying how these observables change depending on the _average_, _maximum_ and _minimum_ group age. The latter analysis appears to be the most interesting one, since it allows us to spot the presence of children, and we limit ourselves to it in the main text. Results corresponding to the dependence on average and maximum age are found in appendix [maxavage]. Table [table4a] and figures [f4b], [f4b2] show the minimum age dependence of all observables (based on the analysis of 1089 dyads). Although the differences between distributions are statistically significant, both velocity and distance observables are mostly constant for groups whose minimum age is in the 20-60 years range. We nevertheless find that the group depth (the observable characterising the tendency of pedestrians not to walk abreast) assumes a very high value in groups with children, a minimum in the 20-29 years range, and then grows with age.
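Binning a dyad by the age of its youngest member, as done for table [table4a], can be sketched as follows (the slot labels mirror the table; the helper name is ours):

```python
def age_slot(ages, width=10, top=70):
    """Decade slot of the dyad's minimum age, e.g. '20-29 years';
    ages at or above `top` collapse into a single open-ended slot."""
    m = min(ages)
    if m >= top:
        return f"{top} years or more"
    lo = (m // width) * width
    return f"{lo}-{lo + width - 1} years"

print(age_slot([34, 27]))  # '20-29 years'
print(age_slot([5, 38]))   # '0-9 years'
print(age_slot([72, 75]))  # '70 years or more'
```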
On the other hand, the abreast distance is relatively low for groups with children (as we will see below, abreast distance grows with body size; the relatively low value in elderly groups may similarly be related to the shorter height of elderly people in the Japanese population). Velocity is mostly constant below 60 years, but drops for elderly groups.

minimum age | N   | velocity [mm/s]     | distance [mm]     | abreast distance [mm] | depth [mm]
0-9 years   | 31  | 1143 ± 42 (σ=235)   | 995 ± 69 (σ=383)  | 529 ± 34 (σ=189)      | 701 ± 87 (σ=485)
10-19 years | 63  | 1158 ± 33 (σ=259)   | 791 ± 33 (σ=259)  | 624 ± 19 (σ=148)      | 359 ± 40 (σ=320)
20-29 years | 364 | 1181 ± 9.1 (σ=173)  | 793 ± 11 (σ=218)  | 668 ± 8.1 (σ=154)     | 307 ± 14 (σ=264)
30-39 years | 292 | 1204 ± 12 (σ=202)   | 836 ± 14 (σ=238)  | 673 ± 10 (σ=176)      | 364 ± 18 (σ=307)
40-49 years | 149 | 1181 ± 14 (σ=176)   | 841 ± 18 (σ=224)  | 664 ± 13 (σ=158)      | 384 ± 26 (σ=311)
50-59 years | 111 | 1164 ± 18 (σ=193)   | 825 ± 21 (σ=219)  | 649 ± 15 (σ=160)      | 378 ± 30 (σ=318)
60-69 years | 67  | 1028 ± 21 (σ=170)   | 881 ± 41 (σ=335)  | 638 ± 20 (σ=164)      | 468 ± 52 (σ=422)
70+ years   | 12  | 886 ± 29 (σ=99.8)   | 786 ± 79 (σ=275)  | 588 ± 19 (σ=66.6)     | 385 ± 100 (σ=363)
F           |     | 10.7                | 3.96              | 4.23                  | 8.02
p           |     |                     | 0.000282          | 0.000128              |
det. coeff. |     | 0.065               | 0.025             | 0.0267                | 0.0494
effect size |     | 1.6                 | 0.583             | 0.808                 | 1.37

[Figures f4b and f4b2: dependence of the observables on minimum age (in f4b2, black circles, red squares and blue triangles distinguish three observables). Dashed lines provide standard error confidence intervals; the point at 75 years corresponds to the "70 years or more" slot.]

The probability functions for the different observables in different age ranges are shown in figures [f4d], [f4e], [f4f] and [f4g], and their statistical analysis is presented in section [overage]. We may easily see, from the large tail of the distance distribution, the high values of the depth distribution, and the spread of the abreast distance distribution, that the presence of a child causes the group not to walk very abreast. The abreast distance peak is higher in "working age" people with respect to young and elderly dyads. Elderly people have a very narrow peak in the distance distribution, but also a fat tail. Velocity in the 0-19 age range assumes lower peaks than in the 20-59 range, but has a large spread, while in elderly people it assumes clearly lower values. (The tracking of short people, and thus of children, is more difficult, and the tracked position could be affected by higher sensor noise, although our time filter, see again section [traj], should remove this problem. We thus examined a portion of the videos corresponding to groups with children, and noticed that children have indeed an erratic behaviour that leads them to sudden accelerations and non-abreast formations. We thus believe that the large spread of observables for dyads with children is due to actual pedestrian behaviour.)

[Figures f4d, f4e, f4f and f4g: probability distribution functions for different (minimum) age ranges. 0-9 years: green, circles; 10-19: red, diamonds; 30-39: orange, squares; 50-59: blue, triangles; over 70: black, stars.]
In appendix [furtherage] we analyse possible effects due to density, relation, gender and age. Interesting results reported in the appendix suggest a tendency of families _not_ to walk abreast even when formed only by adults, and differences in groups with children based on gender (probably affected by the gender of the parent). Coder reliability is analysed in appendix [coderage]. A further interesting result is that, as shown in figure [fastchild] (based on the analysis of appendix [accounting]), dyads with children _walk faster at higher density_, in contrast with the usual pedestrian behaviour.

[Figure fastchild: velocity dependence on minimum age at two different density ranges (blue squares and red circles). The point at 75 years corresponds to the "70 years or more" slot; dashed lines show standard error confidence intervals.]

Height is the only pedestrian feature that is not the result of coding, since it is automatically tracked by our system.
We again considered (see appendix [heicomp]) average, minimum and maximum height. The three indicators give similar results, and in the following we use minimum height, to better identify the presence of children. The dependence of all observables on minimum height (based on 1089 dyads) is shown in table [table_h_min]. We have statistically significant differences for all observables, but the interpretation of the results is not straightforward, due to the peculiar behaviour of dyads including short people (most probably children). As shown in figure [f5d], velocity grows with height, as expected from the literature, but dyads with a very short individual represent an exception (children move fast despite their short height). In figure [f5d2] we may see that distance is mostly independent of height above 150 cm, but assumes a very high value for dyads including short pedestrians. Figure [f5b] shows the height dependence of the depth, which turns out to be a decreasing function, although a comparison with a linear fit shows that dyads including people under 140 cm walk with a particularly spread (non abreast) distribution, while above 150 cm the group depth is almost constant. The abreast distance, on the other hand, appears to grow mostly in a linear way (figure [f5b2]). This could lead us to think that abreast distance depends only on body size. Nevertheless, while there is probably a strong dependence of abreast distance on height, this linear dependence is also due to the balance between the non-linear male and female behaviours, as shown in figure [fastchild2], based on the analysis of appendix [accounting]. Furthermore, as we will see below when studying the probability distribution functions of the observables, the growth of abreast distance with height is a combination of an increase of the peak position and a decrease of the number of people walking in non abreast formation (figure [f5f]).
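The linear best fits quoted in the figure captions are ordinary least-squares fits of a binned observable against minimum height. A self-contained sketch in pure Python; the bin averages below are synthetic, generated exactly on a line whose parameters echo the magnitude of the caption's abreast-distance fit (intercept 258 mm, slope ~2.4 mm/cm), so the recovered coefficients are known:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit ys ~ a + b * xs; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    return a, b

# Bin midpoints for minimum height (cm) and synthetic abreast distances (mm);
# these are illustrative numbers, not the measured data.
heights = [135.0, 145.0, 155.0, 165.0, 175.0, 185.0]
abreast = [258.0 + 2.4 * h for h in heights]

a, b = linear_fit(heights, abreast)  # recovers a = 258.0, b = 2.4
```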
minimum height | N   | velocity [mm/s]     | distance [mm]      | abreast distance [mm] | depth [mm]
< 140 cm       | 39  | 1130 ± 34 (σ=211)   | 1004 ± 65 (σ=404)  | 573 ± 34 (σ=210)      | 672 ± 80 (σ=501)
140-150 cm     | 39  | 1106 ± 50 (σ=311)   | 875 ± 46 (σ=289)   | 619 ± 25 (σ=156)      | 469 ± 64 (σ=403)
150-160 cm     | 234 | 1104 ± 13 (σ=197)   | 797 ± 16 (σ=246)   | 631 ± 8.9 (σ=136)     | 360 ± 21 (σ=328)
160-170 cm     | 498 | 1169 ± 8.1 (σ=182)  | 821 ± 11 (σ=243)   | 657 ± 7.7 (σ=172)     | 362 ± 14 (σ=311)
170-180 cm     | 262 | 1242 ± 11 (σ=173)   | 827 ± 12 (σ=197)   | 699 ± 9.6 (σ=155)     | 321 ± 16 (σ=251)
> 180 cm       | 17  | 1232 ± 51 (σ=211)   | 793 ± 48 (σ=198)   | 689 ± 33 (σ=135)      | 270 ± 53 (σ=217)
F              |     | 14.5                | 5.25               | 7.45                  | 9.69
p              |     |                     | 9.03               | 6.9                   |
det. coeff.    |     | 0.0626              | 0.0237             | 0.0333                | 0.0428
effect size    |     | 0.744               | 0.591              | 0.773                 | 0.922

[Figures f5d, f5d2, f5b and f5b2: dependence of velocity, distance, depth and abreast distance on minimum height, data points shown by red circles, with linear best fits as continuous black lines (fit parameters: 715 mm/s and 2.81; 1390 mm and -3.35; 1530 mm and -7.01; 258 mm and 2.42). Dashed lines provide standard error confidence intervals; the points at 135 and 185 cm correspond to the "less than 140 cm" and "more than 180 cm" slots.]

[Figure fastchild2: abreast distance dependence on minimum height for different genders. Red circles: two females; blue squares: two males (continuous lines: linear fits, with parameters 154 mm and 2.98, and 211 mm and 2.79). The points at 135 and 185 cm represent the "less than 140" and "more than 180" cm slots; dashed lines show confidence intervals.]

The probability functions for the different observables in different minimum height ranges are shown in figures [f5h], [f5e], [f5f] and [f5g], and their statistical analysis is shown in section [overh]. We see that the abreast distance distributions are displaced to the right with growing height, with a corresponding decrease in the values assumed around zero (particularly high in the 0-140 cm distribution, probably due to children's behaviour). Similarly, the depth distribution becomes narrower with growing height, and presents a very different behaviour in the shortest height slot. The absolute distance distributions are displaced to the right with growing height, but the very fat tail of the 0-140 cm distribution causes the average value to have a more complex dependence on height. The velocity distribution shows a clear displacement to the right with growing height, both in peaks and tails, although the 0-140 cm distribution has again a peculiar behaviour due to its very pronounced width.

[Figures f5h, f5e, f5f and f5g: probability distribution functions for different (minimum) height ranges. 0-140 cm: green circles; 140-150: red diamonds; 150-160: orange squares; 160-170: blue triangles; 170-180: black stars.]

In appendix [furtherhei] we analyse the validity of these results on height dependence when we consider other effects such as age, relation, gender and density, and verify that, although sometimes diminished, height related results are present also when analysing groups with fixed age, relation and gender.

By analysing how pedestrian dyad behaviour depends on the group's "intrinsic properties", namely the characteristics of its members and the relation between them, we observed that female dyads are slower and walk closer than male ones, that workers walk faster, at a larger distance and more abreast than leisure oriented people, and that the inter-group relation has a strong effect on group structure, with couples walking very close and abreast, colleagues walking at a larger distance, and friends walking more abreast than family members. Pedestrian height influences velocity and abreast distance, observables that grow with the average or minimum group height. We also found that elderly people walk slowly, while active age adults walk at the maximum velocity. Dyads with children have a strong tendency to walk in a non abreast formation, with a large distance but a shorter abreast distance. In the supplementary materials appendices, we analysed how these features affect each other, and we verified that the effects of the different features are present, even though sometimes diminished, when the other features are kept fixed (e.g.
, when we compare colleagues of different gender, and the like). The cross-analysis of the interplay between these features also revealed a richer structure. Interesting results are, for example, that the velocity of dyads with children appears to increase with density (at least in the low-medium density range), and that children's behaviour appears to be influenced by the gender of the parent. In this work we focused on "group features" more than "individual features", i.e. we did not explicitly address questions such as age or height differences within the dyad, and the like. We may nevertheless infer from our results some information about how group members with different height, age and gender "compromise" on group dynamics. We may see, for example, in appendix [heicomp] that average height gives, for the velocity and abreast distance observables that are growing functions of height, a result in between those obtained for minimum and maximum height. Height is a physical and not a social feature, and it appears that, after averaging over all social features, the chosen velocity and abreast distance are the averages of those preferred by the individuals. Gender and age appear, on the other hand, to have a deeper impact on social interactions. In mixed groups, males appear to adapt to female velocity when we average over relations, but when we analyse for secondary effects in appendix [accounting], we see that this is true for couples and families, but it does not apply to friends or colleagues. Similarly, while couples walk closer than male or female same sex dyads, mixed colleague groups walk farther apart than same sex dyads of both genders.
In a similar way, due also to the peculiar behaviour of children, it is impossible to find a simple "compromise" rule for age related behaviour. More information could be inferred from an analysis explicitly taking into account age differences, which we reserve for a future work. The exact figures found in this work may depend strongly on the environment in which they have been recorded, and vary not only with density, but also with other macroscopic crowd dynamics features (uni-directional flow, bi-directional flow, multi-directional flow, presence or absence of standing pedestrians, etc.) as well as with architectural features of the environment (open space, large corridor, narrow corridor, etc.). For this reason, attempts to verify our findings in different environments should be directed not at specific quantitative figures (e.g. male dyads walk at 1.25 m/s and female ones at 1.1 m/s) but at qualitative patterns (e.g. males walk faster than females in a statistically significant way, with a difference in velocity comparable to the standard deviation of the distributions). It would in particular be very interesting to compare our findings with different cultural settings, since it may be expected that social group behaviour is strongly dependent on culture, so that at least some of the patterns could change when similar data collection experiments are performed outside of (western) Japan. A possible extension of this work regards the analysis of three person group behaviour. Furthermore, as stated above, in this work we limited ourselves to group properties and not individual properties (e.g., we verified whether a group was mixed, but we did not study the specific position of the male or the female).
After a revision of the coding procedure, we could analyse whether, according to gender, age or height differences, roles such as "leader" or "follower" emerge. Finally, a mathematical modelling following the cited approaches could be performed. Besides the obvious applications to pedestrian simulations, with possible influence on building and event planning, disaster prevention, and even on entertainment industries such as movies and video games, we are particularly interested in applications in the field of robotics and, more in general, of slow vehicles with automatic navigation capabilities deployed in pedestrian facilities, such as delivery vehicles or automatic wheelchairs and carts. Such vehicles will arguably become more common in the future and, in order to navigate safely inside human crowds, and to move together with other humans "as in a group", they will need an understanding of pedestrian and group behaviour. More specifically, a "companion" robot or an automatic wheelchair will need:

1. to be able to recognise pedestrian groups, using an automatic recognition algorithm;
2. to be able to predict their behaviour, both in order to be able to safely avoid them and to perform a socially acceptable behaviour;
3. to be able to move together with other humans, and behave as a member of a group.

For all these applications it is extremely important to understand deeply how pedestrians actually behave, and we plan to use these findings to improve our previous algorithms and systems as part of the development of a platform for autonomous personal mobility systems.

This research is partly supported by the Ministry of Internal Affairs and Communications (MIC), Japan, _Research and development project on autonomous personal mobility including robots_, by CREST, JST, and by JSPS KAKENHI grant number 16J40223.

In this work we are interested in describing how pedestrian group behaviour is influenced by some _intrinsic features_, such as purpose, relation, gender, age or height. Each feature (or factor) may be divided in categories (e.g., in the case of relation the categories are colleagues, couples, family and friends). Each group is coded as belonging to a specific category, so that each category contains a given number of groups. As described in section [traj], for each group we can measure the value of each observable every 500 ms. We may call these measurements the group's events (i.e.
we have a set of measurements, or events, for each group in each category). We believe that the largest amount of quantitative information regarding the dependence of group behaviour on intrinsic features is included in the overall probability distribution functions concerning all measurements of a given observable, as shown for example in figure [f1d], since from the analysis of these figures we can understand what is the probability of having a given value of each observable in each category. It is nevertheless useful to extract some quantitative information, such as average values and standard deviations, from these distributions. Furthermore, although the purpose of this paper is not to attach a p-value "statistical independence label" to each feature, to compare such average values it is customary and useful to compute, along with other statistical indicators such as the effect size and the determination coefficient, the standard error of each distribution, and to perform the related analysis of variance (ANOVA). The computation of these latter statistical quantities is nevertheless based on an assumption of statistical independence of the data, an assumption that clearly does not hold for all our observations. (Suppose we were following a single group for one hour. We would then have, if we ignore measurement noise, perfect information regarding the behaviour of that group in that hour and, under the strong assumption of time independence in the group behaviour, good statistics about the behaviour _of that particular group_. We would still not have any information about how group behaviour changes between groups in the category, since that information depends on the number of groups analysed. Furthermore, since in general we track a given group only for the few seconds it needs to cross the corridor, the observations for a fixed group are also strongly time correlated.)
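A sketch of the group-then-category averaging used for the tables (average each group's events first, then average and spread over groups); the numbers are toy data, and the population (1/N) convention for the standard deviation is an assumption:

```python
from math import sqrt

# Per-group event measurements of one observable (e.g. velocity in mm/s),
# grouped by category; toy numbers, for illustration only.
events = {
    "colleagues": [[1250, 1280, 1265], [1300, 1310], [1240, 1260, 1250, 1245]],
    "couples":    [[1080, 1100], [1120, 1105, 1110]],
}

def category_stats(groups):
    """Average each group first, then average over groups."""
    means = [sum(g) / len(g) for g in groups]         # group averages
    n = len(means)
    mean = sum(means) / n                             # category average
    sd = sqrt(sum((m - mean) ** 2 for m in means) / n)  # 1/N convention assumed
    se = sd / sqrt(n)                                 # standard error
    return mean, sd, se

mean, sd, se = category_stats(events["colleagues"])
```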
We thus proceed in the following way, justified by the fact that we have a similar number of observations for each group. For each observable $O$ we compute the average over each group $j$ of a category $c$,
$$\bar{O}_j = \frac{1}{n_j}\sum_{k=1}^{n_j} O_{j,k},$$
and then provide its average value in the category as
$$\hat{O}_c = \frac{1}{N_c}\sum_{j=1}^{N_c}\bar{O}_j,$$
where $N_c$ is the number of groups in the category; the standard error is given by
$$E_c = \frac{\sigma_c}{\sqrt{N_c}},$$
where the standard deviation is
$$\sigma_c = \sqrt{\frac{1}{N_c}\sum_{j=1}^{N_c}(\bar{O}_j - \hat{O}_c)^2}.$$
As a rule of thumb, we consider two categories as different when their averages differ by a few standard errors; this rule of thumb is obviously related to the ANOVA analysis reported in the text. The ANOVA analysis proceeds as follows. We define $C$ as the number of categories for a given feature, $N_g = \sum_c N_c$ as the total number of groups, and the overall average of the observable as
$$\bar{O} = \frac{1}{N_g}\sum_c N_c \hat{O}_c.$$
We then define the between-category and within-category sums of squares as
$$S_b = \sum_c N_c(\hat{O}_c - \bar{O})^2, \qquad S_w = \sum_c \sum_{j=1}^{N_c}(\bar{O}_j - \hat{O}_c)^2,$$
and the degrees of freedom as $d_b = C - 1$ and $d_w = N_g - C$. The $F$ factor is then defined as
$$F = \frac{S_b/d_b}{S_w/d_w}.$$
This result is reported in our tables as $F$, along with the celebrated $p$ value, that provides the probability, under the hypothesis of independence of the data, that the difference between the distributions is due to chance (the $p$ distribution has to be computed numerically, but a high $F$ value assures a small $p$ value). Let us see how this relates to the rule of thumb for standard errors. Let us assume we have two categories with the same number of groups $N_c$ and comparable standard deviations $\sigma$; for each category we clearly have $E = \sigma/\sqrt{N_c}$, and using the definitions above (the numbers shown in the tables use the approximate definition; for the group numbers usual in this work, the difference is at most 5%) we get the expression
$$F \approx \frac{(\hat{O}_1 - \hat{O}_2)^2}{2 E^2},$$
so that the rule of thumb corresponds to a high $F$ value and thus a low $p$ value. This expression says that the $F$ factor is high if the variation inside the categories is smaller than the variation between the categories, and if the total number of observations is high. Due to the large number of data points, the $F$ values in appendix [overstat] (where we use all the observable measurements instead of group averages) are always very high, and the corresponding $p$ values very low, but the hypothesis of statistical independence of the data underlying the usual interpretation of $p$ is obviously not valid there.

There are nevertheless some statistical estimators that do not depend dramatically on the number of observations, and that will thus have a similar value whether computed using all the data points or only the group averages. One such estimator is the coefficient of determination
$$R^2 = \frac{S_b}{S_b + S_w},$$
which can also be computed from the $F$ factor as
$$R^2 = \frac{F d_b}{F d_b + d_w},$$
and which provides an estimate of how much of the variance in the data is "explained" by the category averages. The $R^2$ coefficient may attain low values if two or more category distribution functions are very similar, as is usually the case in our work. To point out the presence of at least one distribution that is clearly different from the others we may use the following definition of the effect size. We first define the pooled standard deviation of two categories $c$ and $c'$,
$$\sigma_{c,c'} = \sqrt{\frac{(n_c - 1)\sigma_c^2 + (n_{c'} - 1)\sigma_{c'}^2}{n_c + n_{c'} - 2}},$$
where $n_c$ and $n_{c'}$ are the number of points used for computing the averages and standard deviations ($N_c$ and $N_{c'}$ if we are using group averages, the overall numbers of events if we are using the overall distributions), and then we consider the maximum pairwise effect size
$$d = \max_{c,c'} \frac{|\hat{O}_c - \hat{O}_{c'}|}{\sigma_{c,c'}}.$$
While a low $p$ value tells us about the significance of the statistical difference between two distributions, the difference may often be so small that it can be verified only if a large amount of data is collected. But if we also have a high $d$ value, then the two distributions are different enough to be distinguished also using a relatively reduced amount of data.

We refrain from applying the machinery of two-way or $n$-way ANOVA to our data, since our ecological data set is extremely unbalanced, and it is unbalanced for the very reason that our "factors" are not independent variables. It is nevertheless useful to analyse the interplay between the different features, and we do that in appendix [accounting] by performing a statistical analysis, similar to the one described above, of a given feature while keeping another feature fixed to one of its categories. (Along with the intrinsic features, we also use the external feature of pedestrian crowd density. Since the same group may contribute to different densities, when operating at a fixed density we use for group averages all groups that contribute with at least 5 data points, instead of the usual 10, to the observable distribution for that density value.) Sometimes this analysis is performed on a reduced number of groups, and thus the corresponding $p$ value may be high. This does not imply that the analysis is valueless, at least in our opinion, since it provides new information. The effect size and determination coefficient values are, in this situation, useful to compare different observables under the given condition.
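The F computation over group averages can be sketched as follows (a minimal one-way ANOVA over per-group averages; for a small example the sums of squares can be checked by hand):

```python
def one_way_F(categories):
    """One-way ANOVA F factor computed from per-group averages.

    `categories` is a list of lists: one list of group averages per category.
    """
    all_means = [m for cat in categories for m in cat]
    N_g, C = len(all_means), len(categories)
    grand = sum(all_means) / N_g
    # Between-category and within-category sums of squares
    s_b = sum(len(cat) * (sum(cat) / len(cat) - grand) ** 2 for cat in categories)
    s_w = sum((m - sum(cat) / len(cat)) ** 2 for cat in categories for m in cat)
    d_b, d_w = C - 1, N_g - C
    return (s_b / d_b) / (s_w / d_w)

# Two categories of four group averages each: s_b = 18, s_w = 8,
# d_b = 1, d_w = 6, so F = 18 / (8 / 6) = 13.5
F = one_way_F([[0, 0, 2, 2], [3, 3, 5, 5]])
```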
As an example, table [table2c1] tells us that some observables have a stronger variation between relation categories at fixed gender than others, and so on. Furthermore, in these situations, an analysis of statistical indicators that do not depend critically on the number of observations, such as the effect size, is particularly valuable.

Table [overt1] provides a statistical analysis of the overall probability distributions for the purpose categories.

purpose     | N events | velocity [mm/s]     | distance [mm]     | abreast distance [mm] | depth [mm]
leisure     | 38501    | 1096 ± 1.3 (σ=251)  | 799 ± 1.6 (σ=309) | 630 ± 1.2 (σ=236)     | 360 ± 2 (σ=388)
work        | 18936    | 1257 ± 1.7 (σ=235)  | 834 ± 2.1 (σ=287) | 714 ± 1.6 (σ=227)     | 315 ± 2.5 (σ=341)
F           |          | 5400                | 169               | 1640                  | 184
p           |          |                     |                   |                       |
det. coeff. |          | 0.0859              | 0.00293           | 0.0278                | 0.00319
effect size |          | 0.652               | 0.115             | 0.36                  | 0.12

Table [overt2] provides a statistical analysis of the overall probability distributions for the relation categories.

relation    | N events | velocity [mm/s]     | distance [mm]     | abreast distance [mm] | depth [mm]
colleagues  | 18172    | 1262 ± 1.7 (σ=234)  | 840 ± 2.2 (σ=290) | 720 ± 1.7 (σ=229)     | 317 ± 2.6 (σ=344)
couples     | 5273     | 1085 ± 3.2 (σ=231)  | 699 ± 3.7 (σ=271) | 584 ± 2.6 (σ=188)     | 290 ± 4.4 (σ=318)
families    | 12596    | 1072 ± 2.2 (σ=246)  | 834 ± 3.2 (σ=357) | 592 ± 2.3 (σ=260)     | 452 ± 4 (σ=447)
friends     | 17634    | 1113 ± 2 (σ=260)    | 788 ± 2 (σ=265)   | 659 ± 1.6 (σ=214)     | 312 ± 2.5 (σ=338)
F           |          | 1940                | 362               | 975                   | 485
p           |          |                     |                   |                       |
det. coeff. |          | 0.0978              | 0.0198            | 0.0517                | 0.0264
effect size |          | 0.795               | 0.493             | 0.614                 | 0.392

Table [overt3] provides a statistical analysis of the overall probability distributions for the gender categories.

gender      | N events | velocity [mm/s]     | distance [mm]     | abreast distance [mm] | depth [mm]
two females | 14688    | 1075 ± 2.1 (σ=251)  | 773 ± 2.2 (σ=268) | 647 ± 1.7 (σ=202)     | 302 ± 2.9 (σ=346)
mixed       | 19311    | 1098 ± 1.7 (σ=239)  | 803 ± 2.4 (σ=334) | 614 ± 1.8 (σ=248)     | 388 ± 3 (σ=411)
two males   | 23516    | 1237 ± 1.6 (σ=249)  | 839 ± 1.9 (σ=292) | 702 ± 1.6 (σ=239)     | 337 ± 2.3 (σ=355)
F           |          | 2570                | 225               | 791                   | 232
p           |          |                     |                   |                       |
det. coeff. |          | 0.0822              | 0.00778           | 0.0268                | 0.008
effect size |          | 0.647               | 0.233             | 0.365                 | 0.224

Table [overt4] provides a statistical analysis of the overall probability distributions for the minimum age ranges.

minimum age | N events | velocity [mm/s]     | distance [mm]     | abreast distance [mm] | depth [mm]
0-9 years   | 1041     | 1127 ± 8.4 (σ=272)  | 983 ± 15 (σ=480)  | 573 ± 9.5 (σ=306)     | 663 ± 18 (σ=580)
10-19 years | 3443     | 1110 ± 5.2 (σ=303)  | 767 ± 5.1 (σ=298) | 626 ± 3.8 (σ=222)     | 322 ± 6.2 (σ=364)
20-29 years | 18679    | 1167 ± 1.8 (σ=240)  | 788 ± 2.1 (σ=289) | 665 ± 1.6 (σ=223)     | 301 ± 2.6 (σ=349)
30-39 years | 15552    | 1179 ± 2.1 (σ=264)  | 816 ± 2.4 (σ=294) | 667 ± 2 (σ=248)       | 343 ± 2.9 (σ=357)
40-49 years | 7974     | 1167 ± 2.7 (σ=242)  | 838 ± 3.3 (σ=296) | 668 ± 2.7 (σ=243)     | 374 ± 4.2 (σ=378)
50-59 years | 6025     | 1153 ± 3.3 (σ=253)  | 812 ± 3.7 (σ=284) | 653 ± 2.9 (σ=223)     | 358 ± 4.7 (σ=367)
60-69 years | 3969     | 1001 ± 3.5 (σ=219)  | 836 ± 5.4 (σ=340) | 643 ± 3.8 (σ=242)     | 409 ± 6.7 (σ=419)
70+ years   | 832      | 877 ± 6 (σ=172)     | 793 ± 13 (σ=363)  | 599 ± 7.8 (σ=224)     | 383 ± 16 (σ=453)
F           |          | 400                 | 89.1              | 46.7                  | 175
p           |          |                     |                   |                       |
det. coeff. |          | 0.0464              | 0.0107            | 0.00566               | 0.0208
effect size |          | 1.16                | 0.619             | 0.382                 | 0.991

Table [overt5] provides a statistical analysis of the overall probability distributions for the minimum height ranges.

minimum height | N events | velocity [mm/s]     | distance [mm]     | abreast distance [mm] | depth [mm]
< 140 cm       | 1579     | 1127 ± 6.9 (σ=274)  | 942 ± 11 (σ=457)  | 605 ± 7.6 (σ=300)     | 578 ± 14 (σ=553)
140-150 cm     | 2206     | 1032 ± 6.7 (σ=315)  | 855 ± 8 (σ=374)   | 644 ± 5.3 (σ=248)     | 420 ± 10 (σ=468)
150-160 cm     | 13064    | 1076 ± 2.2 (σ=251)  | 779 ± 2.5 (σ=281) | 628 ± 1.8 (σ=209)     | 337 ± 3.2 (σ=365)
160-170 cm     | 26345    | 1151 ± 1.5 (σ=245)  | 810 ± 1.9 (σ=306) | 655 ± 1.5 (σ=243)     | 348 ± 2.3 (σ=374)
170-180 cm     | 13497    | 1234 ± 2.1 (σ=243)  | 819 ± 2.3 (σ=269) | 700 ± 2 (σ=232)       | 309 ± 2.8 (σ=323)
> 180 cm       | 824      | 1224 ± 9.3 (σ=268)  | 823 ± 11 (σ=325)  | 686 ± 8.1 (σ=234)     | 309 ± 14 (σ=404)
F              |          | 648                 | 102               | 149                   | 171
p              |          |                     |                   |                       |
det. coeff.    |          | 0.0533              | 0.00875           | 0.0128                | 0.0146
effect size    |          | 0.796               | 0.534             | 0.398                 | 0.532

Work-oriented dyads are more frequently found during working days, in which the environment presents a lower density (and thus higher velocity and inter-group pedestrian distance). It is thus important to analyse the results of section [purpose] when they are divided by density ranges, for example by comparing results in the 0-0.05 pedestrians per square meter range with those in the 0.15-0.2 range. The results are reported in tables [table1b1] and [table1b2], showing that the differences in velocity and abreast distance remain significant regardless of density. The difference in depth becomes significant at high density, while at very low density it is not significant (while the opposite happens to the absolute distance).
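The density-conditioned comparison described above, together with the at-least-5-points-per-group rule mentioned in the appendix, can be sketched as follows on toy observations (the tuple layout and function name are ours):

```python
# Observations: (group_id, density, observable value). When conditioning on a
# density range, keep only groups contributing at least `min_points`
# observations in that range (threshold from the appendix; toy data below).
obs = [
    ("g1", 0.03, 1270), ("g1", 0.03, 1260), ("g1", 0.04, 1280),
    ("g1", 0.03, 1255), ("g1", 0.04, 1265),
    ("g2", 0.12, 1100), ("g2", 0.13, 1090),
]

def groups_in_range(observations, lo, hi, min_points=5):
    """Group averages restricted to a density range [lo, hi)."""
    by_group = {}
    for gid, rho, v in observations:
        if lo <= rho < hi:
            by_group.setdefault(gid, []).append(v)
    # keep only groups with enough data in this density range
    return {g: sum(vs) / len(vs) for g, vs in by_group.items()
            if len(vs) >= min_points}

low = groups_in_range(obs, 0.0, 0.05)    # {'g1': 1266.0}
mid = groups_in_range(obs, 0.1, 0.15)    # {} : g2 has too few points
```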
& & + leisure & 426 & 1113 10 ( =215 ) & 851 15 ( =303 ) & 658 9 ( =186 ) & 400 18 ( =373 ) + work & 209 & 1274 12 ( =169 ) & 924 20 ( =296 ) & 741 14 ( =203 ) & 409 26 ( =373 ) + & & 89.6 & 8.17 & 25.7 & 0.0807 + & & & 0.0044 & 5.36 & 0.776 + & & 0.124 & 0.0127 & 0.039 &0.000128 + & & 0.8 & 0.242 & 0.428 & 0.024 + & & + leisure & 145 & 1084 14 ( =170 ) & 764 17 ( =209 ) & 560 13 ( =158 ) & 390 26 ( =308 ) + work & 22 & 1229 27 ( =125 ) & 754 26 ( =123 ) & 673 25 ( =117 ) & 237 40 ( =186 ) + & & 14.7 & 0.0513 & 10.2 & 5.05 + & & 0.000182 & 0.821 & 0.00167 & 0.026 + & & 0.0817 & 0.000311 & 0.0583 & 0.0297 + & & 0.881 & 0.0521 & 0.735 & 0.516 + in table [ table1b3 ] and [ table1b4 ] we report , respectively , and values for purpose corresponding to each observable and density range , showing that the , and distributions are different in a statistically significant way at different density ranges , although the effect on grows with density .differences in are significant only at the lowest density range . & + 0 - 0.05 ped / m & & 0.0044 & 5.36 & 0.776 + 0.05 - 0.1 ped / m & & 0.682 & & 0.000517 + 0.1 - 0.15 ped / m & & 0.221 & & 1.32 + 0.15 - 0.2 ped / m & 0.000182 & 0.821 & 0.00167 & 0.026 + & + 0 - 0.05 ped / m & 0.8 & 0.242 & 0.428 & 0.024 + 0.05 - 0.1 ped / m & 0.914 & 0.0292 & 0.515 & 0.248 + 0.1 - 0.15 ped / m & 0.812 & 0.117 & 0.627 & 0.467 + 0.15 - 0.2ped / m & 0.881 & 0.0521 & 0.735 & 0.516 + the work and leisure populations are strongly biased regarding gender . in tables[ table1c1 ] , [ table1c2 ] and [ table1c3 ] we show the results for the work and leisure observables when limited to , respectively , female , mixed and male dyads . 
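The gender bias of the work and leisure populations noted above can be quantified with a chi-square-based association measure. The paper does not state which statistic it uses for this, so Cramér's V is an assumption here; the contingency counts, however, are taken from the dyad numbers in the gender-split tables (work: 29 two-female, 41 mixed, 302 two-male; leisure: 222, 330, 164).

```python
import math

def cramers_v(table):
    """Cramér's V association measure for an r x c contingency table
    (list of rows of counts). 0 = no association, 1 = perfect."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    n = sum(row_tot)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / n
            chi2 += (obs - exp) ** 2 / exp
    k = min(len(table), len(table[0])) - 1
    return math.sqrt(chi2 / (n * k))

# Rows: work / leisure; columns: two-female / mixed / two-male dyads.
bias = cramers_v([[29, 41, 302], [222, 330, 164]])
```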
while velocity is still significantly different when gender is fixed , absolute distance in men , and all distance observables in women , are not significantly different . we may thus conclude that differences between work- and leisure - oriented people are present regardless of gender , but are magnified by the different gender composition of the two populations . & & + leisure & 222 & 1092 13 ( =194 ) & 794 16 ( =235 ) & 644 8.4 ( =125 ) & 328 22 ( =322 ) + work & 29 & 1184 30 ( =162 ) & 755 28 ( =150 ) & 663 19 ( =101 ) & 274 38 ( =204 ) + & & 5.97 & 0.733 & 0.615 & 0.777 + & & 0.0153 & 0.393 & 0.433 & 0.379 + & & 0.0234 & 0.00293 & 0.00247 & 0.00311 + & & 0.484 & 0.17 & 0.155 & 0.175 + & & + leisure & 330 & 1097 9.9 ( =180 ) & 814 15 ( =267 ) & 602 9.6 ( =174 ) & 415 19 ( =341 ) + work & 41 & 1226 26 ( =167 ) & 902 48 ( =308 ) & 698 24 ( =152 ) & 420 65 ( =419 ) + & & 18.9 & 3.79 & 11.3 & 0.00662 + & & 1.77 & 0.0524 & 0.000849 & 0.935 + & & 0.0488 & 0.0102 & 0.0298 & 1.79 + & & 0.722 & 0.323 & 0.558 & 0.0135 + & & + leisure & 164 & 1196 16 ( =207 ) & 846 19 ( =246 ) & 660 14 ( =173 ) & 392 25 ( =325 ) + work & 302 & 1285 8.7 ( =152 ) & 846 13 ( =218 ) & 720 9 ( =157 ) & 325 16 ( =271 ) + & & 28.1 & 0.000251 & 14.4 & 5.5 + & & 1.83 & 0.987 & 0.000165 & 0.0195 + & & 0.057 & 5.42 & 0.0301 & 0.0117 + & & 0.515 & 0.00154 & 0.369 & 0.228 + in tables ( [ table1d1 ] ) and ( [ table1d2 ] ) we show the results for the work and leisure observables when limited to groups of a given average age . the results suggest that differences may be present at any age ( in particular concerning ) , but are definitely stronger for more mature walkers .
& & + leisure & 292 & 1164 10 ( =177 ) & 798 14 ( =242 ) & 656 9.7 ( =165 ) & 326 17 ( =290 ) + work & 78 & 1242 19 ( =166 ) & 775 17 ( =152 ) & 684 12 ( =108 ) & 266 22 ( =197 ) + & & 12.2 & 0.608 & 1.91 & 2.93 + & & 0.000536 & 0.436 & 0.168 & 0.088 + & & 0.0321 & 0.00165 & 0.00515 & 0.00789 + & & 0.446 & 0.0996 & 0.176 & 0.219 + & & + leisure & 61 & 1053 21 ( =164 ) & 808 32 ( =247 ) & 601 20 ( =155 ) & 404 46 ( =356 ) + work & 53 & 1276 21 ( =153 ) & 845 24 ( =173 ) & 706 20 ( =144 ) & 345 36 ( =261 ) + & & 54.8 & 0.808 & 13.7 & 0.966 + & & & 0.371 & 0.000328 & 0.328 + & & 0.329 & 0.00716 & 0.109 & 0.00855 + & & 1.4 & 0.17 & 0.702 & 0.186 + in table [ table1d3 ] and [ table1d4 ] we report , respectively , and values for purpose corresponding to each observable and average age range , showing again that differences have a tendency to grow with age . & + 20 - 29 years & 0.000536 & 0.436 & 0.168 & 0.088 + 30 - 39 years & & 0.0689 & 6.34 & 0.144 + 40 - 49 years & & 0.12 & 4.78 & 0.264 + 50 - 59 years & & 0.371 & 0.000328 & 0.328 + 60 - 69 years & 0.0233 & 0.221 & 0.48 & 0.463 + & + 20 - 29 years & 0.446 & 0.0996 & 0.176 & 0.219 + 30 - 39 years & 0.994 & 0.224 & 0.682 & 0.18 + 40 - 49 years & 0.97 & 0.226 & 0.68 & 0.162 + 50 - 59 years & 1.4 & 0.17 & 0.702 & 0.186 + 60 - 69 years & 1.21 & 0.649 & 0.373 & 0.389 + in tables ( [ table1e1 ] ) and ( [ table1e2 ] ) we show the results for the work and leisure observables when limited to groups of a given average height , and in tables [ table1e3 ] and [ table1e4 ] we report , respectively , and values for purpose corresponding to each observable and average height range .differences appear to grow with height , probably affected also by the gender distributions . 
& & + leisure & 108 & 1107 25 ( =260 ) & 821 26 ( =268 ) & 629 15 ( =153 ) & 389 34 ( =352 ) + work & 10 & 1152 51 ( =160 ) & 709 27 ( =86 ) & 631 28 ( =88.5 ) & 264 32 ( =101 ) + & & 0.283 & 1.71 & 0.00249 & 1.24 + & & 0.596 & 0.194 & 0.96 & 0.268 + & & 0.00243 & 0.0145 & 2.15 & 0.0106 + & & 0.177 & 0.434 & 0.0166 & 0.37 + & & + leisure & 188 & 1138 12 ( =168 ) & 801 15 ( =212 ) & 628 12 ( =159 ) & 366 21 ( =291 ) + work & 233 & 1291 10 ( =157 ) & 850 15 ( =230 ) & 730 10 ( =153 ) & 322 18 ( =273 ) + & & 92.2 & 5.22 & 44 & 2.53 + & & & 0.0228 & & 0.113 + & & 0.18 & 0.0123 & 0.0951 & 0.006 + & & 0.944 & 0.225 & 0.652 & 0.156 + & + 150 - 160 cm & 0.596 & 0.194 & 0.96 & 0.268 + 160 - 170 cm & & 0.0557 & 0.00126 & 0.953 + 170 - 180 cm & & 0.0228 & & 0.113 + 180 cm & 0.773 & 0.959 & 0.289 & 0.522 + & + 150 - 160 cm & 0.177 & 0.434 & 0.0166 & 0.37 + 160 - 170 cm & 0.696 & 0.217 & 0.368 & 0.00667 + 170 - 180 cm & 0.944 & 0.225 & 0.652 & 0.156 + 180 cm & 0.0998 & 0.0177 & 0.368 & 0.22 + as discussed above , work - oriented ( and thus colleagues ) dyads are more present during working days , in which the environment presents a lower density. tables [ table2b1 ] and [ table2b2 ] show the observables dependence for fixed density ranges ( ped / m and ped / m , respectively ) .the major trends exposed in the main text are present at any density , as confirmed also by tables [ table2b3 ] and [ table2b4 ] , reporting and values , respectively , for all density ranges . 
& & + colleagues & 202 & 1276 12 ( =169 ) & 934 21 ( =298 ) & 751 15 ( =207 ) & 409 26 ( =376 ) + couples & 62 & 1103 25 ( =193 ) & 760 38 ( =297 ) & 600 22 ( =177 ) & 359 40 ( =314 ) + families & 125 & 1084 19 ( =208 ) & 894 30 ( =331 ) & 617 16 ( =175 ) & 512 37 ( =413 ) + friends & 193 & 1130 17 ( =230 ) & 830 19 ( =258 ) & 685 12 ( =162 ) & 338 23 ( =326 ) + & & 30.5 & 7.59 & 19 & 6.17 + & & & 5.52 & & 0.000396 + & & 0.137 & 0.0379 & 0.0896 & 0.031 + & & 1.03 & 0.585 & 0.754 & 0.482 + & & + colleagues & 22 & 1229 27 ( =125 ) & 754 26 ( =123 ) & 673 25 ( =117 ) & 237 40 ( =186 ) + couples & 19 & 1064 28 ( =124 ) & 663 31 ( =135 ) & 542 22 ( =97.1 ) & 290 36 ( =159 ) + families & 68 & 1068 22 ( =180 ) & 802 30 ( =247 ) & 532 21 ( =170 ) & 465 44 ( =362 ) + friends & 57 & 1107 22 ( =168 ) & 753 22 ( =164 ) & 603 20 ( =149 ) & 332 33 ( =251 ) + & & 5.54 & 2.59 & 5.87 & 4.77 + & & 0.0012 & 0.055 & 0.000794 & 0.00327 + & & 0.0931 & 0.0457 & 0.098 & 0.0811 + & & 1.32 & 0.613 & 0.888 & 0.694 + & + 0 - 0.05 ped / m & & 5.52 & & 0.000396 + 0.05 - 0.1 ped / m & & 0.000158 & & + 0.1 - 0.15 ped / m & & 2.22 & & + 0.15 - 0.2 ped / m & 0.0012 & 0.055 & 0.000794 & 0.00327 + 0.2 - 0.25 ped / m & 0.378 & 0.327 & 0.144 & 0.664 + & + 0 - 0.05 ped / m & 1.03 & 0.585 & 0.754 & 0.482 + 0.05 - 0.1 ped / m & 1.07 & 0.476 & 0.732 & 0.538 + 0.1 - 0.15 ped / m & 1.16 & 0.666 & 0.857 & 0.761 + 0.15 - 0.2 ped / m & 1.32 & 0.613 & 0.888 & 0.694 + 0.2 - 0.25 ped / m & 0.675 & 0.798 & 0.983 & 0.974 + we now compare the results regarding relation for groups of a given gender ( two females , mixed and two males ) in tables [ table2c1 ] , [ table2c2 ] and [ table2c3 ] . differences in the distributions ( and the corresponding trends ) are still significant in fixed - gender groups , with the exception of the female and male distributions , although , as shown by the relatively high values , this may be due to the low amount of data in some categories .
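Each relation table above reports a single multi-group test statistic and an effect size across the four categories (colleagues, couples, families, friends). The excerpt does not name the test used, so purely as an illustrative assumption, here is a plain one-way ANOVA F statistic comparing k independent groups of one observable:

```python
from statistics import mean

def anova_f(groups):
    """One-way ANOVA F statistic for a list of k independent samples:
    between-group variance over within-group variance."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - mean(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical per-category velocity samples (mm/s).
f_stat = anova_f([[1276, 1260], [1103, 1090], [1084, 1100], [1130, 1125]])
```

A large F indicates that between-category differences dominate within-category scatter, which is the pattern the significant relation comparisons above describe.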
the patterns analysed in the main text are mostly respected , although we may notice some differences , such as female friends walking at a greater distance than colleagues , and two - male families walking at a very high speed . & & + colleagues & 24 & 1167 30 ( =145 ) & 735 26 ( =128 ) & 664 20 ( =95.9 ) & 238 34 ( =168 ) + families & 28 & 1023 32 ( =171 ) & 847 58 ( =305 ) & 565 27 ( =140 ) & 488 77 ( =405 ) + friends & 184 & 1105 15 ( =197 ) & 777 15 ( =205 ) & 658 8.5 ( =115 ) & 293 20 ( =274 ) + & & 3.85 & 1.9 & 7.85 & 6.48 + & & 0.0227 & 0.153 & 0.000503 & 0.00182 + & & 0.032 & 0.016 & 0.0631 & 0.0527 + & & 0.902 & 0.465 & 0.817 & 0.783 + & & + colleagues & 35 & 1228 30 ( =175 ) & 923 55 ( =327 ) & 702 27 ( =158 ) & 440 75 ( =445 ) + couples & 96 & 1099 17 ( =169 ) & 714 22 ( =219 ) & 600 15 ( =150 ) & 291 24 ( =231 ) + families & 183 & 1078 13 ( =182 ) & 860 21 ( =285 ) & 588 13 ( =173 ) & 493 28 ( =372 ) + friends & 20 & 1153 41 ( =183 ) & 820 43 ( =192 ) & 616 43 ( =192 ) & 391 70 ( =311 ) + & & 7.47 & 7.99 & 4.63 & 7.26 + & & 7.49 & 3.72 & 0.00345 & 9.96 + & & 0.0636 & 0.0677 & 0.0404 & 0.0619 + & & 0.832 & 0.83 & 0.67 & 0.61 + & & + colleagues & 299 & 1287 8.8 ( =152 ) & 852 13 ( =220 ) & 724 9.3 ( =160 ) & 329 16 ( =273 ) + families & 35 & 1234 39 ( =229 ) & 891 63 ( =375 ) & 571 31 ( =182 ) & 537 79 ( =467 ) + friends & 114 & 1187 19 ( =198 ) & 811 17 ( =186 ) & 676 14 ( =147 ) & 335 23 ( =246 ) + & & 14.5 & 2.1 & 16.1 & 8.35 + & & 8.13 & 0.124 & 1.76 & 0.000276 + & & 0.0611 & 0.00933 & 0.0675 & 0.0362 + & & 0.607 & 0.329 & 0.94 & 0.698 + tables [ table2d1 ] and [ table2d2 ] show the dependence of the observables for fixed minimum age ranges ( 20 - 29 and 50 - 59 years , respectively ) . the major trends exposed in the main text are present even when the age is kept fixed . we may notice that assumes a very high value in families even when children are not present .
and values for the relation feature at different minimum age ranges are shown in , respectively , tables [ table2d3 ] and [ table2d4 ] . & & + colleagues & 86 & 1255 18 ( =165 ) & 813 20 ( =185 ) & 706 16 ( =144 ) & 291 24 ( =219 ) + couples & 74 & 1115 19 ( =165 ) & 711 27 ( =229 ) & 600 18 ( =154 ) & 281 28 ( =243 ) + families & 23 & 1109 39 ( =187 ) & 877 58 ( =277 ) & 581 37 ( =177 ) & 527 78 ( =373 ) + friends & 164 & 1186 13 ( =164 ) & 801 16 ( =208 ) & 683 10 ( =128 ) & 298 21 ( =265 ) + & & 10.9 & 5.05 & 11.1 & 5.9 + & & 7.21 & 0.00195 & 5.89 & 0.00062 + & & 0.0872 & 0.0423 & 0.0883 & 0.0491 + & & 0.861 & 0.684 & 0.825 & 0.882 + & & + colleagues & 52 & 1274 22 ( =159 ) & 844 24 ( =172 ) & 700 20 ( =142 ) & 350 36 ( =263 ) + families & 28 & 1048 32 ( =169 ) & 846 55 ( =289 ) & 562 34 ( =182 ) & 492 78 ( =410 ) + friends & 22 & 1051 36 ( =167 ) & 759 44 ( =208 ) & 637 24 ( =115 ) & 308 59 ( =276 ) + & & 23.4 & 1.29 & 7.63 & 2.55 + & & & 0.28 & 0.000825 & 0.0835 + & & 0.321 & 0.0254 & 0.134 & 0.0489 + & & 1.39 & 0.339 & 0.878 & 0.514 + & + 10 - 19 years & 0.558 & 0.049 & 0.615 & 0.0313 + 20 - 29 years & 7.21 & 0.00195 & 5.89 & 0.00062 + 30 - 39 years & & 0.0128 & & 0.0513 + 40 - 49 years & 1.02 & 0.39 & 0.000266 & 0.537 + 50 - 59 years & & 0.28 & 0.000825 & 0.0835 + 60 - 69 years & 0.0525 & 0.248 & 0.745 & 0.388 + 70 years & 0.385 & 0.251 & 0.198 & 0.237 + & + 10 - 19 years & 0.888 & 3.23 & 0.609 & 2.49 + 20 - 29 years & 0.861 & 0.684 & 0.825 & 0.882 + 30 - 39 years & 1.48 & 0.77 & 1.05 & 0.636 + 40 - 49 years & 1.2 & 0.529 & 0.848 & 0.557 + 50 - 59 years & 1.39 & 0.339 & 0.878 & 0.514 + 60 - 69 years & 1.21 & 1.12 & 0.386 & 0.669 + 70 years & 0.612 & 0.844 & 0.937 & 0.865 + tables [ table2e1 ] and [ table2e2 ] show the observables dependence for fixed average height ranges ( 160 - 170 and 170 - 180 cm , respectively ) , showing that the distributions are still different in a significant way , and that the major patterns exposed in the main text are confirmed 
. tables [ table2e3 ] and [ table2e4 ] show respectively the and values for all height ranges . values are always high showing that some reduced values are due to the low number of groups in some ranges . & & + colleagues & 89 & 1240 16 ( =149 ) & 862 26 ( =249 ) & 686 18 ( =167 ) & 380 37 ( =350 ) + couples & 47 & 1106 28 ( =191 ) & 731 33 ( =226 ) & 622 28 ( =189 ) & 287 30 ( =204 ) + families & 121 & 1090 18 ( =196 ) & 854 27 ( =295 ) & 593 15 ( =169 ) & 487 34 ( =371 ) + friends & 172 & 1135 13 ( =169 ) & 798 17 ( =221 ) & 659 10 ( =131 ) & 321 23 ( =302 ) + & & 13.3 & 3.96 & 7.08 & 7.42 + & & 2.47 & 0.0084 & 0.000119 & 7.42 + & & 0.086 & 0.0272 & 0.0476 & 0.0498 + & & 0.843 & 0.542 & 0.551 & 0.599 + & & + colleagues & 231 & 1293 10 ( =157 ) & 859 15 ( =232 ) & 738 10 ( =157 ) & 325 18 ( =274 ) + couples & 45 & 1089 22 ( =145 ) & 700 33 ( =219 ) & 576 14 ( =95.4 ) & 300 39 ( =264 ) + families & 56 & 1107 22 ( =166 ) & 818 31 ( =234 ) & 557 20 ( =148 ) & 462 48 ( =361 ) + friends & 71 & 1162 20 ( =166 ) & 811 19 ( =156 ) & 679 17 ( =145 ) & 328 26 ( =215 ) + & & 38.9 & 6.77 & 31.5 & 4.13 + & & & 0.000183 & & 0.00672 + & & 0.226 & 0.0485 & 0.192 &0.0301 + & & 1.32 & 0.692 & 1.16 & 0.503 + & + 140 cm & 0.362 & 0.108 & 0.61 & 0.0849 + 140 - 150 cm & 0.12 & 0.181 & 0.785 & 0.299 + 150 - 160 cm & 0.842 & 0.133 & 0.803 & 0.0402 + 160 - 170 cm & 2.47 & 0.0084 & 0.000119 & 7.42 + 170 - 180 cm & & 0.000183 & & 0.00672 + 180 cm & 0.00432 & 0.551 & 0.126 &0.951 + & + 140 cm & 0.767 & 1.41 & 0.423 & 1.53 + 140 - 150 cm & 0.977 & 0.798 & 0.159 & 0.612 + 150 - 160 cm & 0.405 & 0.658 & 0.521 & 0.594 + 160 - 170 cm & 0.843 & 0.542 & 0.551 & 0.599 + 170 - 180cm & 1.32 & 0.692 & 1.16 & 0.503 + 180 cm & 2.78 & 0.972 & 1.41 & 0.331 + tables [ table3c1 ] and [ table3c2 ] show the dependence on gender of observables at fixed density ranges ( ped / m and ped / m , respectively ) .we may see that only the observable loses the statistically significant gender dependence at high density 
( but still shows it at lower density , when pedestrians may move more freely ; furthermore , the effect size is almost not affected by density ) , while the other observables preserve it at any density .tables [ table3c3 ] and [ table3c4 ] show the dependence of , respectively , the gender and values at different density values . & & + two females & 160 & 1095 17 ( =219 ) & 818 21 ( =267 ) & 669 11 ( =138 ) & 337 27 ( =346 ) + mixed & 217 & 1112 13 ( =196 ) & 870 23 ( =340 ) & 642 13 ( =194 ) & 448 28 ( =409 ) + two males & 259 & 1254 12 ( =196 ) & 914 18 ( =283 ) & 733 13 ( =217 ) & 404 22 ( =351 ) + & & 41.5 & 5.06 & 14.1 & 4.11 + & & & 0.00658 & 1.04 & 0.0169 + & & 0.116 & 0.0157 & 0.0426 & 0.0128 + & & 0.771 & 0.346 & 0.441 & 0.289 + & & + two females & 35 & 1073 28 ( =164 ) & 714 26 ( =152 ) & 572 18 ( =107 ) & 318 39 ( =230 ) + mixed & 73 & 1062 18 ( =152 ) & 782 29 ( =247 ) & 521 20 ( =172 ) & 448 42 ( =361 ) + two males & 59 & 1171 23 ( =178 ) & 767 19 ( =147 ) & 644 18 ( =136 ) & 304 28 ( =218 ) + & & 7.81 & 1.4 & 11.2 & 4.6 + & & 0.000578 & 0.249 & 2.88 & 0.0114 + & & 0.0869 & 0.0168 & 0.12 & 0.0531 + & & 0.665 & 0.308 & 0.786 & 0.471 + & + 0 - 0.05 ped / m & & 0.00658 & 1.04 & 0.0169 + 0.05 - 0.1 ped / m & & 0.0448 & & 0.00164 + 0.1 - 0.15 ped / m & & 0.897 & 9.41 & 0.00478 + 0.15 - 0.2 ped / m & 0.000578 & 0.249 & 2.88 & 0.0114 + 0.2 - 0.25 ped / m & 0.0304 & 0.0628 & 0.31 & 0.43 + & + 0 - 0.05 ped / m & 0.771 & 0.346 & 0.441 & 0.289 + 0.05 - 0.1ped / m & 0.771 & 0.235 & 0.537 & 0.291 + 0.1 - 0.15 ped / m & 0.737 & 0.0554 & 0.582 & 0.35 + 0.15 - 0.2 ped / m & 0.665 & 0.308 & 0.786 & 0.471 + 0.2 - 0.25 ped / m & 1.31 & 1.56 & 0.942 & 0.751 + tables [ table3b1 ] , [ table3b2 ] and [ table3b3 ] show the gender dependence of observables in , respectively , colleagues , families and friends ( couples are not shown being exclusively of mixed gender ) . 
& & + two females & 24 & 1167 30 ( =145 ) & 735 26 ( =128 ) & 664 20 ( =95.9 ) & 238 34 ( =168 ) + mixed & 35 & 1228 30 ( =175 ) & 923 55 ( =327 ) & 702 27 ( =158 ) & 440 75 ( =445 ) + two males & 299 & 1287 8.8 ( =152 ) & 852 13 ( =220 ) & 724 9.3 ( =160 ) & 329 16 ( =273 ) + & & 8.49 & 4.82 & 1.78 & 3.7 + & & 0.00025 & 0.00862 & 0.17 & 0.0256 + & & 0.0457 & 0.0264 & 0.00995 & 0.0204 + & & 0.798 & 0.709 & 0.38 & 0.561 + & & + two females & 28 & 1023 32 ( =171 ) & 847 58 ( =305 ) & 565 27 ( =140 ) & 488 77 ( =405 ) + mixed & 183 & 1078 13 ( =182 ) & 860 21 ( =285 ) & 588 13 ( =173 ) & 493 28 ( =372 ) + two males & 35 & 1234 39 ( =229 ) & 891 63 ( =375 ) & 571 31 ( =182 ) & 537 79 ( =467 ) + & & 12.3 & 0.197 & 0.308 & 0.198 + & & 8.41 & 0.821 & 0.735 & 0.821 + & & 0.0917 & 0.00162 & 0.00253 & 0.00163 + & & 1.03 & 0.128 & 0.135 & 0.112 + & & + two females & 184 & 1105 15 ( =197 ) & 777 15 ( =205 ) & 658 8.5 ( =115 ) & 293 20 ( =274 ) + mixed & 20 & 1153 41 ( =183 ) & 820 43 ( =192 ) & 616 43 ( =192 ) & 391 70 ( =311 ) + two males & 114 & 1187 19 ( =198 ) & 811 17 ( =186 ) & 676 14 ( =147 ) & 335 23 ( =246 ) + & & 6.02 & 1.27 & 1.91 & 1.76 + & & 0.00272 & 0.283 & 0.15 & 0.173 + & & 0.0368 & 0.00798 & 0.012 & 0.0111 + & & 0.412 & 0.213 & 0.388 & 0.354 + we may see that male dyads are farther apart and faster than female ones regardless of relation ( although the differences in , and are quite reduced in families and friends ) . mixed dyad behaviour , on the other hand , depends strongly on relation . mixed dyads are the only ones including couples , and this strongly affects their behaviour ; they also represent the largest part of families . they are very little represented among friends ( interestingly , mixed dyads of friends walk much closer , in abreast distance , than same - sex dyads , although their absolute distance is higher than in female dyads ) . the `` colleagues '' category could represent a fair field for comparing the effect of gender , and in it the mixed behaviour is
somehow in between the two sexes ( although the absolute distance and group depth are very large , suggesting non - abreast formations ) , but in our data set colleagues are extremely biased towards males , and thus the analysis is hindered by the low numbers of female and mixed dyads . finally we may notice that in families and friends , the effect of gender on distance ( , and ) is very reduced , but the one on velocity is persistent . the velocity effect size in families is nevertheless more than twice the one for friends . tables [ table3d1 ] and [ table3d2 ] show the dependence on gender of observables at fixed average age ranges ( 20 - 29 years and 50 - 59 years , respectively ) . interestingly , the differences between young two - female and two - male dyads are reduced ( and almost absent regarding the distance observables , and ) , while they are very strong in older groups . young mixed dyad behaviour is strongly influenced by the presence of couples . & & + two females & 111 & 1166 16 ( =170 ) & 791 21 ( =220 ) & 686 10 ( =110 ) & 275 26 ( =271 ) + mixed & 125 & 1122 16 ( =175 ) & 784 23 ( =255 ) & 612 16 ( =182 ) & 360 27 ( =307 ) + two males & 134 & 1247 14 ( =164 ) & 803 17 ( =201 ) & 689 13 ( =148 ) & 301 20 ( =235 ) + & & 18 & 0.235 & 10.5 & 3.03 + & & 3.37 & 0.791 & 3.77 & 0.0496 + & & 0.0895 & 0.00128 & 0.054 & 0.0162 + & & 0.739 & 0.0832 & 0.47 & 0.291 + & & + two females & 20 & 1010 30 ( =136 ) & 708 21 ( =95.5 ) & 613 27 ( =121 ) & 254 34 ( =151 ) + mixed & 34 & 1071 29 ( =170 ) & 856 48 ( =278 ) & 608 32 ( =189 ) & 462 66 ( =388 ) + two males & 60 & 1255 22 ( =168 ) & 847 25 ( =192 ) & 686 18 ( =141 ) & 369 38 ( =298 ) + & & 22.8 & 3.69 & 3.37 & 2.81 + & & & 0.0281 & 0.0379 & 0.0643 + & & 0.291 & 0.0623 & 0.0573 & 0.0482 + & & 1.52 & 0.646 & 0.486 & 0.646 + tables [ table3d3 ] and [ table3d4 ] show the and values for gender in different average age ranges . minimum age ranges are shown in tables [ table3d5 ] and [ table3d6 ] .
& + 10 - 19 years & 0.0301 & 0.685 & 0.573 & 0.903 + 20 - 29 years & 3.37 & 0.791 & 3.77 & 0.0496 + 30 - 39 years & & 0.0477 & 7.66 & 0.0433 + 40 - 49 years & & 0.106 & 0.000167 & 0.856 + 50 - 59 years & & 0.0281 & 0.0379 & 0.0643 + 60 - 69 years & 0.00145 & 0.495 & 0.17 & 0.655 + 70 years & 0.245 & 0.564 & 0.543 & 0.598 + & + 10 - 19 years & 0.769 & 0.241 & 0.309 & 0.14 + 20 - 29 years & 0.739 & 0.0832 & 0.47 & 0.291 + 30 - 39 years & 0.87 & 0.48 & 0.732 & 0.322 + 40 - 49 years & 1.3 & 0.329 & 0.619 & 0.0946 + 50 - 59 years & 1.52 & 0.646 & 0.486 & 0.646 + 60 - 69 years & 1.49 & 0.457 & 0.666 & 0.27 + 70 years & 1.56 & 0.802 & 0.915 & 0.788 + & + 0 - 9 years & 0.0872 & 0.17 & 0.577 & 0.198 + 10 - 19 years & 0.00563 & 0.497 & 0.981 & 0.484 + 20 - 29 years & 1.67 & 0.665 & 8.91 & 0.0654 + 30 - 39 years & & 0.0904 & 1.99 & 0.027 + 40 - 49 years & 3.02 & 0.193 & 0.000602 & 0.778 + 50 - 59 years & 3.78 & 0.0458 & 0.0555 & 0.0743 + 60 - 69 years & 0.00245 & 0.446 & 0.105 & 0.651 + 70 years & 0.245 & 0.564 & 0.543 & 0.598 + & + 0 - 9 years & 1.11 & 1.21 & 0.449 & 1.17 + 10 - 19 years & 0.949 & 0.473 & 0.0487 & 0.421 + 20 - 29 years & 0.715 & 0.111 & 0.475 & 0.271 + 30 - 39 years & 0.907 & 0.396 & 0.749 & 0.306 + 40 - 49 years & 1.62 & 0.343 & 0.65 & 0.113 + 50 - 59 years & 1.47 & 0.616 & 0.487 & 0.659 + 60 - 69 years & 1.45 & 0.541 & 0.79 & 0.275 + 70 years & 1.56 & 0.802 & 0.915 & 0.788 + tables [ table3e1 ] and [ table3e2 ] show the dependence on gender of observables at fixed average height ranges ( 150 - 160 and 170 - 180 cm , respectively ) . the results ( in particular , for the shorter height range , the effect size , which helps in dealing with the reduced number of groups ) show that differences between the sexes are still present when we consider individuals of similar height .
& & + two females & 75 & 1094 27 ( =232 ) & 791 26 ( =225 ) & 627 15 ( =131 ) & 352 36 ( =316 ) + mixed & 25 & 1045 35 ( =176 ) & 796 43 ( =217 ) & 603 33 ( =167 ) & 376 60 ( =300 ) + two males & 18 & 1272 82 ( =346 ) & 921 92 ( =390 ) & 674 42 ( =180 ) & 493 110 ( =447 ) + & & 4.95 & 1.89 & 1.22 & 1.24 + & & 0.00869 & 0.156 & 0.3 & 0.294 + & & 0.0792 & 0.0318 & 0.0207 & 0.0211 + & & 0.873 & 0.493 & 0.415 & 0.408 + & & + two females & 16 & 1091 48 ( =191 ) & 741 30 ( =119 ) & 662 22 ( =88.5 ) & 238 32 ( =128 ) + mixed & 121 & 1127 16 ( =171 ) & 797 23 ( =250 ) & 598 14 ( =149 ) & 396 31 ( =338 ) + two males & 284 & 1270 9.6 ( =161 ) & 846 13 ( =213 ) & 723 9.4 ( =159 ) & 324 15 ( =258 ) + & & 36.7 & 3.4 & 27.8 & 3.89 + & & & 0.0344 & & 0.0212 + & & 0.149 & 0.016 & 0.117 & 0.0183 + & & 1.1 & 0.502 & 0.799 & 0.492 + tables [ table3e3 ] and [ table3e4 ] show , respectively , the gender and values for different average height ranges . & + 140 cm & 0.614 & 0.0596 & 0.958 & 0.148 + 140 - 150 cm & 0.000737 & 0.372 & 0.0226 & 0.306 + 150 - 160 cm & 0.00869 & 0.156 & 0.3 & 0.294 + 160 - 170 cm & 1.0 & 0.0653 & 0.0212 & 0.000455 + 170 - 180 cm & & 0.0344 & & 0.0212 + 180 cm & 0.0241 & 0.191 & 0.137 &0.647 + & + 140 cm & 0.774 & 1.89 & 0.183 & 1.39 + 140 - 150 cm & 2.06 & 0.693 & 1.47 & 0.868 + 150 - 160 cm & 0.873 & 0.493 & 0.415 & 0.408 + 160 - 170 cm & 0.523 & 0.279 & 0.235 & 0.416 + 170 - 180 cm & 1.1 & 0.502 & 0.799 &0.492 + 180 cm & 1.28 & 0.708 & 0.811 & 0.246 + tables [ table4fur_dl ] and [ table4fur_dh ] show the age dependence of observables in , respectively , the ped / m and ped / m density ranges .results mostly reflect those of the main text , with high or relatively high values suggesting that some not very good values may be due to the scarcity of data in the children and elderly categories ( i.e. 
the categories with the most different behaviour ) .a remarkable feature , presented with the caveats related to sensor noise in the tracking of children , is that while in general velocity decreases with density , this is not true for dyads with children , as shown in figure [ fastchild ] in the main text . & & + 0 - 9 years & 9 & 1075 68 ( =205 ) & 1078 97 ( =291 ) & 663 62 ( =186 ) & 704 140 ( =414 ) + 10 - 19 years & 44 & 1175 43 ( =288 ) & 802 40 ( =262 ) & 661 25 ( =167 ) & 337 44 ( =294 ) + 20 - 29 years & 184 & 1198 14 ( =196 ) & 853 23 ( =313 ) & 694 14 ( =193 ) & 357 27 ( =372 ) + 30 - 39 years & 185 & 1196 16 ( =217 ) & 894 22 ( =306 ) & 696 16 ( =223 ) & 418 27 ( =368 ) + 40 - 49 years & 87 & 1150 20 ( =191 ) & 909 32 ( =297 ) & 683 22 ( =202 ) & 440 42 ( =395 ) + 50 - 59 years & 71 & 1157 23 ( =198 ) & 844 27 ( =228 ) & 678 18 ( =149 ) & 381 38 ( =320 ) + 60 - 69 years & 47 & 1022 25 ( =174 ) & 912 51 ( =348 ) & 670 27 ( =182 ) & 481 62 ( =424 ) + 70 years & 9 & 891 31 ( =92.6 ) & 815 100 ( =307 ) & 605 19 ( =55.8 ) & 411 130 ( =394 ) + & & 6.98 & 1.61 & 0.527 & 1.94 + & & 4.97 & 0.129 & 0.815 & 0.0613 + & & 0.0722 & 0.0176 & 0.00584 & 0.0211 + & & 1.59 & 1.03 & 0.419 & 1.16 + & & + 0 - 9 years & 6 & 1284 110 ( =258 ) & 693 51 ( =126 ) & 485 47 ( =116 ) & 401 92 ( =225 ) + 10 - 19 years & 14 & 1146 47 ( =176 ) & 806 65 ( =244 ) & 571 39 ( =147 ) & 426 91 ( =341 ) + 20 - 29 years & 72 & 1099 16 ( =133 ) & 745 20 ( =167 ) & 598 17 ( =145 ) & 322 30 ( =255 ) + 30 - 39 years & 39 & 1102 31 ( =192 ) & 766 33 ( =208 ) & 575 24 ( =149 ) & 372 50 ( =313 ) + 40 - 49 years & 17 & 1121 32 ( =131 ) & 763 37 ( =152 ) & 547 42 ( =172 ) & 403 68 ( =279 ) + 50 - 59 years & 10 & 1057 53 ( =167 ) & 739 60 ( =190 ) & 485 58 ( =184 ) & 416 110 ( =343 ) + 60 - 69 years & 7 & 1021 75 ( =199 ) & 967 130 ( =354 ) & 616 84 ( =222 ) & 618 160 ( =423 ) + 70 years & 2 & 760 18 ( =25.4 ) & 644 22 ( =31 ) & 585 8.9 ( =12.5 ) & 185 52 ( =73.5 ) + & & 2.76 & 1.46 & 1.11 & 1.21 + & & 
0.00993 & 0.184 & 0.359 & 0.299 + & & 0.108 & 0.0606 & 0.0466 & 0.0506 + & & 2.22 & 0.987 & 0.652 & 1.1 + tables [ table4fur_d1 ] and [ table4fur_d2 ] show , respectively , the minimum age and values in different density ranges . & + 0 - 0.05 ped / m & 4.97 & 0.129 & 0.815 & 0.0613 + 0.05 - 0.1 ped / m & & 0.00286 & 0.0232 & 1.26 + 0.1 - 0.15 ped / m & 8.51 & 0.0207 & 0.00346 & 7.22 + 0.15 - 0.2 ped / m & 0.00993 & 0.184 & 0.359 & 0.299 + 0.2 - 0.25 ped / m & 0.651 & 0.504 & 0.118 & 0.328 + & + 0 - 0.05 ped / m & 1.59 & 1.03 & 0.419 & 1.16 + 0.05 - 0.1 ped / m & 1.6 & 0.941 & 0.605 & 1.36 + 0.1 - 0.15 ped / m & 2.25 & 0.689 & 0.93 & 0.924 + 0.15 - 0.2 ped / m & 2.22 & 0.987 & 0.652 & 1.1 + 0.2 - 0.25 ped / m & 0.513 & 2.14 & 1.15 & 1.14 + tables [ table4fur_col ] , [ table4fur_coup ] , [ table4fur_fam ] and [ table4fur_fri ] show the age dependence of the observables in , respectively , colleagues , couples , families and friends . the observables show almost no age dependence in the 20 - 59 years range ( with the exception of friend velocity ) . it is interesting to note that the distribution assumes a larger value for families even when only adults are involved . another interesting , although expected , result is that while dyads with teenagers are very abreast in the friends category , they are not abreast in the family one ( the value is almost doubled in families ) .
& & + 20 - 29 years & 86 & 1255 18 ( =165 ) & 813 20 ( =185 ) & 706 16 ( =144 ) & 291 24 ( =219 ) + 30 - 39 years & 145 & 1293 14 ( =167 ) & 861 21 ( =257 ) & 734 14 ( =165 ) & 331 25 ( =301 ) + 40 - 49 years & 71 & 1258 14 ( =119 ) & 870 28 ( =236 ) & 714 19 ( =161 ) & 363 38 ( =324 ) + 50 - 59 years & 52 & 1274 22 ( =159 ) & 844 24 ( =172 ) & 700 20 ( =142 ) & 350 36 ( =263 ) + 60 - 69 years & 4 & 1217 36 ( =72 ) & 1075 220 ( =433 ) & 692 100 ( =208 ) & 617 320 ( =632 ) + & & 1.17 & 1.72 & 0.702 & 1.62 + & & 0.326 & 0.146 & 0.591 & 0.168 + & & 0.013 & 0.0191 & 0.00789 & 0.018 + & & 0.461 & 1.32 & 0.252 & 1.33 + & & + 10 - 19 years & 2 & 958 180 ( =253 ) & 919 53 ( =74.7 ) & 725 58 ( =81.7 ) & 480 180 ( =257 ) + 20 - 29 years & 74 & 1115 19 ( =165 ) & 711 27 ( =229 ) & 600 18 ( =154 ) & 281 28 ( =243 ) + 30 - 39 years & 17 & 1049 37 ( =151 ) & 670 35 ( =143 ) & 572 32 ( =130 ) & 274 30 ( =124 ) + 40 - 49 years & 3 & 1091 110 ( =187 ) & 897 110 ( =198 ) & 684 66 ( =115 ) & 501 130 ( =223 ) + & & 1.2 & 1.53 & 0.966 & 1.36 + & & 0.315 & 0.211 & 0.412 & 0.261 + & & 0.0376 & 0.0476 & 0.0306 & 0.0424 + & & 0.946 & 1.78 & 1.19 & 1.65 + & & + 0 - 9 years & 31 & 1143 42 ( =235 ) & 995 69 ( =383 ) & 529 34 ( =189 ) & 701 87 ( =485 ) + 10 - 19 years & 36 & 1163 38 ( =230 ) & 831 49 ( =296 ) & 617 30 ( =179 ) & 415 58 ( =347 ) + 20 - 29 years & 23 & 1109 39 ( =187 ) & 877 58 ( =277 ) & 581 37 ( =177 ) & 527 78 ( =373 ) + 30 - 39 years & 46 & 1078 23 ( =159 ) & 814 33 ( =225 ) & 561 24 ( =163 ) & 458 49 ( =330 ) + 40 - 49 years & 41 & 1116 31 ( =199 ) & 801 28 ( =181 ) & 582 23 ( =149 ) & 431 40 ( =256 ) + 50 - 59 years & 28 & 1048 32 ( =169 ) & 846 55 ( =289 ) & 562 34 ( =182 ) & 492 78 ( =410 ) + 60 - 69 years & 37 & 1030 24 ( =145 ) & 911 63 ( =382 ) & 642 25 ( =154 ) & 512 75 ( =456 ) + 70 years & 4 & 847 52 ( =104 ) & 926 190 ( =384 ) & 550 38 ( =75.6 ) & 575 240 ( =477 ) + & & 2.83 & 1.52 & 1.46 & 1.74 + & & 0.00758 & 0.162 & 0.182 & 0.101 + & & 0.0767 & 0.0427 & 0.0412 
& 0.0486 + & & 1.42 & 0.679 & 0.659 & 0.686 + & & + 10 - 19 years & 23 & 1143 61 ( =292 ) & 681 15 ( =73.9 ) & 621 16 ( =78.5 ) & 217 19 ( =93.2 ) + 20 - 29 years & 164 & 1186 13 ( =164 ) & 801 16 ( =208 ) & 683 10 ( =128 ) & 298 21 ( =265 ) + 30 - 39 years & 56 & 1143 28 ( =206 ) & 817 24 ( =178 ) & 644 22 ( =162 ) & 366 38 ( =286 ) + 40 - 49 years & 19 & 1089 47 ( =206 ) & 819 49 ( =213 ) & 682 21 ( =92.3 ) & 341 68 ( =295 ) + 50 - 59 years & 22 & 1051 36 ( =167 ) & 759 44 ( =208 ) & 637 24 ( =115 ) & 308 59 ( =276 ) + 60 - 69 years & 26 & 996 38 ( =192 ) & 808 40 ( =202 ) & 625 33 ( =169 ) & 383 59 ( =299 ) + 70 years & 8 & 906 32 ( =91.4 ) & 716 56 ( =159 ) & 606 18 ( =52.2 ) & 290 85 ( =239 ) + & & 7.17 & 1.82 & 1.98 & 1.29 + & & 3.58 & 0.0947 & 0.0678 & 0.262 + & & 0.121 & 0.0339 & 0.0368 & 0.0242 + & & 1.73 & 0.904 & 0.608 & 0.731 + tables [ table4fur_0o ] , [ table4fur_1o ] and [ table4fur_2o ] show the age dependence of observables in , respectively , dyads with two females , mixed dyads and two males . the results are similar to those shown in the main text . although based on an extremely small amount of data , it is interesting to note the large difference in velocity between two - male and two - female dyads with children ( mixed dyads show a value in between , probably due to the fact that they include male and female parents ) , and the very large ( non - abreast formation ) value assumed in two - female dyads ( mixed dyads , on the contrary , are more abreast ) . these values are based on very few groups , but the differences are nevertheless larger than the standard errors , and could reflect differences in the relation that children have with fathers and mothers ( at least in the observed cultural environment ) .
& & + 0 - 9 years & 6 & 985 88 ( =215 ) & 1252 150 ( =378 ) & 525 78 ( =192 ) & 993 220 ( =535 ) + 10 - 19 years & 20 & 1075 58 ( =258 ) & 738 48 ( =216 ) & 621 23 ( =103 ) & 291 64 ( =288 ) + 20 - 29 years & 110 & 1169 16 ( =167 ) & 789 21 ( =221 ) & 684 10 ( =107 ) & 273 26 ( =277 ) + 30 - 39 years & 55 & 1108 25 ( =188 ) & 777 23 ( =171 ) & 639 16 ( =118 ) & 330 33 ( =246 ) + 40 - 49 years & 24 & 1040 33 ( =163 ) & 827 54 ( =266 ) & 622 25 ( =123 ) & 404 77 ( =379 ) + 50 - 59 years & 17 & 1015 35 ( =143 ) & 704 24 ( =97.3 ) & 623 29 ( =120 ) & 240 32 ( =133 ) + 60 - 69 years & 17 & 923 31 ( =130 ) & 791 52 ( =213 ) & 580 36 ( =149 ) & 390 85 ( =349 ) + 70 years & 3 & 958 14 ( =23.6 ) & 629 31 ( =53.5 ) & 587 40 ( =69 ) & 186 22 ( =37.4 ) + & & 6.27 & 4.83 & 3.74 & 5.64 + & & 8.87 & 4.06 & 0.000721 & 4.77 + & & 0.153 & 0.122 & 0.0969 & 0.139 + & & 1.51 & 1.94 & 1.42 & 1.78 + & & + 0 - 9 years & 12 & 1119 56 ( =193 ) & 888 75 ( =259 ) & 573 61 ( =212 ) & 547 83 ( =287 ) + 10 - 19 years & 16 & 1060 45 ( =181 ) & 840 54 ( =214 ) & 620 36 ( =145 ) & 417 78 ( =313 ) + 20 - 29 years & 120 & 1123 16 ( =175 ) & 782 23 ( =251 ) & 619 17 ( =182 ) & 352 27 ( =301 ) + 30 - 39 years & 93 & 1143 19 ( =180 ) & 834 30 ( =286 ) & 601 19 ( =179 ) & 435 40 ( =386 ) + 40 - 49 years & 53 & 1141 26 ( =191 ) & 802 25 ( =178 ) & 614 21 ( =150 ) & 400 34 ( =245 ) + 50 - 59 years & 34 & 1078 29 ( =168 ) & 848 47 ( =277 ) & 604 32 ( =188 ) & 455 66 ( =387 ) + 60 - 69 years & 38 & 1042 26 ( =160 ) & 905 61 ( =378 ) & 642 25 ( =152 ) & 506 73 ( =451 ) + 70 years & 5 & 831 44 ( =99 ) & 868 160 ( =363 ) & 563 33 ( =72.8 ) & 484 210 ( =463 ) + & & 3.64 & 1.11 & 0.387 & 1.32 + & & 0.000822 & 0.358 & 0.91 & 0.24 + & & 0.0656 & 0.0209 & 0.0074 & 0.0248 + & & 1.76 & 0.431 & 0.536 & 0.652 + & & + 0 - 9 years & 13 & 1237 65 ( =233 ) & 975 120 ( =425 ) & 491 42 ( =153 ) & 709 150 ( =540 ) + 10 - 19 years & 27 & 1277 48 ( =252 ) & 803 58 ( =303 ) & 628 34 ( =175 ) & 376 65 ( =337 ) + 20 - 29 years & 
134 & 1241 14 ( =156 ) & 806 16 ( =180 ) & 697 13 ( =148 ) & 296 18 ( =208 ) + 30 - 39 years & 144 & 1280 16 ( =190 ) & 860 18 ( =222 ) & 732 14 ( =173 ) & 331 22 ( =259 ) + 40 - 49 years & 72 & 1257 14 ( =122 ) & 875 27 ( =233 ) & 715 19 ( =159 ) & 365 39 ( =328 ) + 50 - 59 years & 60 & 1254 22 ( =168 ) & 846 25 ( =193 ) & 683 19 ( =144 ) & 373 38 ( =297 ) + 60 - 69 years & 12 & 1134 48 ( =167 ) & 930 89 ( =307 ) & 711 54 ( =189 ) & 462 120 ( =406 ) + 70 years & 4 & 902 48 ( =95.7 ) & 802 92 ( =183 ) & 619 19 ( =37.6 ) & 410 140 ( =289 ) + & & 3.77 & 1.82 & 5.09 & 4.07 + & & 0.000553 & 0.081 & 1.39 & 0.000239 + & & 0.0544 & 0.0271 & 0.0722 & 0.0586 + & & 2.01 & 0.445 & 1.41 & 1.64 + Tables [table4fur_150] and [table4fur_170] show the age dependence of the observables in, respectively, the 150-160 cm and 170-180 cm minimum height ranges. These data, which follow the patterns highlighted in the main text, present a sufficient number of groups in each category and are thus reliable. In some situations, noisy tracking of a child may produce a category with a very poor and unreliable representation (e.g. groups with children but with a tall minimum height), causing some irregular behaviour in the values of tables [table4fur_h1] and [table4fur_h2].
& & + 10 - 19 years & 21 & 1124 54 ( =246 ) & 783 47 ( =215 ) & 606 31 ( =143 ) & 352 73 ( =333 ) + 20 - 29 years & 75 & 1157 21 ( =184 ) & 800 27 ( =232 ) & 668 14 ( =122 ) & 311 35 ( =307 ) + 30 - 39 years & 48 & 1109 24 ( =168 ) & 804 32 ( =223 ) & 624 20 ( =139 ) & 392 44 ( =305 ) + 40 - 49 years & 32 & 1108 39 ( =218 ) & 828 37 ( =211 ) & 589 27 ( =151 ) & 452 58 ( =328 ) + 50 - 59 years & 19 & 1067 39 ( =171 ) & 757 34 ( =146 ) & 611 36 ( =156 ) & 334 52 ( =228 ) + 60 - 69 years & 33 & 1008 28 ( =163 ) & 808 65 ( =375 ) & 641 20 ( =112 ) & 365 75 ( =429 ) + 70 years & 5 & 883 52 ( =117 ) & 666 48 ( =108 ) & 538 30 ( =67.2 ) & 272 88 ( =197 ) + & & 3.65 & 0.427 & 2.13 & 0.849 + & & 0.00177 & 0.86 & 0.0506 & 0.533 + & & 0.0883 & 0.0112 & 0.0536 & 0.022 + & & 1.52 & 0.802 & 1.09 & 0.569 + & & + 20 - 29 years & 95 & 1209 16 ( =152 ) & 808 18 ( =173 ) & 688 16 ( =156 ) & 309 22 ( =219 ) + 30 - 39 years & 90 & 1269 21 ( =203 ) & 838 24 ( =225 ) & 729 16 ( =155 ) & 300 26 ( =246 ) + 40 - 49 years & 45 & 1265 18 ( =120 ) & 820 20 ( =137 ) & 720 20 ( =131 ) & 298 26 ( =176 ) + 50 - 59 years & 30 & 1241 33 ( =182 ) & 862 44 ( =238 ) & 635 27 ( =148 ) & 436 68 ( =371 ) + & & 2.21 & 0.709 & 3.36 & 2.61 + & & 0.0873 & 0.548 & 0.0194 & 0.0517 + & & 0.0253 & 0.00823 & 0.0379 & 0.0297 + & & 0.339 & 0.282 & 0.613 & 0.51 + & + 140 cm & 0.0333 & 0.137 & 0.0326 & 0.0184 + 140 - 150 cm & 0.0129 & 0.65 & 0.858 & 0.615 + 150 - 160 cm & 0.00177 & 0.86 & 0.0506 & 0.533 + 160 - 170 cm & 0.000643 & 0.00561 & 0.807 & 0.00456 + 170 - 180 cm & 0.0873 & 0.548 & 0.0194 & 0.0517 + 180 cm & 0.98 & 0.292 & 0.56 & 0.0386 + & + 140 cm & 0.829 & 0.566 & 0.83 & 0.919 + 140 - 150 cm & 7.42 & 1.27 & 0.946 & 1.69 + 150 - 160 cm & 1.52 & 0.802 & 1.09 & 0.569 + 160 - 170 cm & 1.83 & 1.09 & 0.926 & 0.818 + 170 - 180 cm & 0.339 & 0.282 & 0.613 & 0.51 + 180 cm & 0.181 & 1.2 & 1.53 & 1.26 + tables [ table5ca ] and [ table5cb ] show the dependence of observables on minimum height in the and ped / m ranges , 
respectively .the trends discussed in the main text are still present .we notice again a tendency of short people ( most probably children ) to walk faster at higher density . and values for minimum height at different densitiesare shown in tables [ table5cex1 ] and [ table5cex2 ] & & + 140 cm & 21 & 1138 63 ( =288 ) & 1034 80 ( =368 ) & 693 44 ( =201 ) & 616 92 ( =421 ) + 140 - 150 cm & 29 & 1067 57 ( =304 ) & 876 51 ( =275 ) & 671 40 ( =218 ) & 420 64 ( =346 ) + 150 - 160 cm & 148 & 1104 18 ( =224 ) & 837 25 ( =304 ) & 648 11 ( =128 ) & 395 32 ( =390 ) + 160 - 170 cm & 290 & 1162 11 ( =187 ) & 880 19 ( =318 ) & 688 13 ( =217 ) & 409 22 ( =379 ) + 170 - 180 cm & 141 & 1259 16 ( =188 ) & 878 21 ( =253 ) & 718 17 ( =198 ) & 364 28 ( =331 ) + 180 cm & 7 & 1242 69 ( =182 ) & 929 73 ( =194 ) & 793 45 ( =120 ) & 316 100 ( =270 ) + & & 9.97 & 1.72 & 2.32 & 1.8 + & & & 0.128 & 0.0422 & 0.11 + & & 0.0733 & 0.0135 & 0.0181 &0.0141 + & & 0.906 & 0.634 & 1.13 & 0.767 + & & + 140 cm & 8 & 1185 57 ( =162 ) & 872 150 ( =416 ) & 512 68 ( =193 ) & 555 190 ( =543 ) + 140 - 150 cm & 6 & 1166 130 ( =315 ) & 965 170 ( =409 ) & 604 73 ( =179 ) & 590 190 ( =457 ) + 150 - 160 cm & 39 & 1068 23 ( =146 ) & 754 28 ( =177 ) & 518 27 ( =169 ) & 408 52 ( =327 ) + 160 - 170 cm & 72 & 1093 20 ( =170 ) & 722 16 ( =136 ) & 586 14 ( =121 ) & 321 24 ( =203 ) + 170 - 180 cm & 42 & 1127 24 ( =158 ) & 792 26 ( =170 ) & 618 26 ( =171 ) & 352 44 ( =284 ) + & & 1.34 & 3.29 & 2.62 & 2.31 + & & 0.259 & 0.0127 & 0.0368 & 0.0597 + & & 0.0319 & 0.0751 & 0.0608 & 0.054 + & & 0.787 & 1.44 & 0.606 & 1.18 + & + 0 - 0.05 ped / m & & 0.128 & 0.0422 & 0.11 + 0.05 - 0.1 ped / m & & 0.000607 & 0.000112 & 4.09 + 0.1 - 0.15 ped / m & 1.84 & & 3.34 & + 0.15 - 0.2 ped / m & 0.259 & 0.0127 & 0.0368 & 0.0597 + 0.2 - 0.25 ped / m & 0.303 & 0.602 & 0.603 & 0.765 + & + 0 - 0.05 ped / m & 0.906 & 0.634 & 1.13 & 0.767 + 0.05 - 0.1ped / m & 1.17 & 0.664 & 0.63 & 0.973 + 0.1 - 0.15 ped / m & 0.856 & 1.32 & 1.12 & 1.29 + 0.15 - 0.2 
ped / m & 0.787 & 1.44 & 0.606 & 1.18 + 0.2 - 0.25 ped / m & 0.886 & 0.578 & 0.54 & 0.422 + tables [ table5cc ] , [ table5cd ] , [ table5ce ] and [ table5cf ] show the dependence of observables on minimum height for colleagues , couples , families and friends , respectively .the dependence of observables on height appears to be attenuated when analysed for groups with a fixed relation ( and in particular for couples ) , as shown by the higher values , and , to a lesser extent , lower values . & & + 150 - 160 cm & 15 & 1135 36 ( =141 ) & 732 25 ( =98.2 ) & 652 22 ( =85.9 ) & 265 31 ( =120 ) + 160 - 170 cm & 159 & 1276 12 ( =157 ) & 874 21 ( =265 ) & 712 13 ( =168 ) & 369 28 ( =351 ) + 170 - 180 cm & 170 & 1288 12 ( =155 ) & 846 15 ( =202 ) & 731 12 ( =153 ) & 312 18 ( =236 ) + 180 cm & 13 & 1220 36 ( =128 ) & 789 59 ( =214 ) & 689 33 ( =120 ) & 263 67 ( =241 ) + & & 5.03 & 2.2 & 1.49 & 1.63 + & & 0.002 & 0.0881 & 0.217 &0.182 + & & 0.041 & 0.0183 & 0.0125 & 0.0137 + & & 0.996 & 0.556 & 0.53 & 0.307 + & & + 150 - 160 cm & 20 & 1060 43 ( =193 ) & 736 32 ( =143 ) & 631 25 ( =111 ) & 288 39 ( =175 ) + 160 - 170 cm & 60 & 1114 21 ( =160 ) & 716 33 ( =254 ) & 591 22 ( =171 ) & 305 34 ( =261 ) + 170 - 180 cm & 15 & 1092 42 ( =162 ) & 678 35 ( =137 ) & 592 24 ( =93.1 ) & 245 39 ( =151 ) + & & 0.773 & 0.296 & 0.528 & 0.408 + & & 0.464 & 0.745 & 0.591 & 0.666 + & & 0.0165 & 0.00639 & 0.0114 & 0.00878 + & & 0.321 & 0.414 & 0.249 & 0.249 + & & + 140 cm & 33 & 1117 38 ( =216 ) & 1062 72 ( =411 ) & 570 39 ( =222 ) & 746 89 ( =509 ) + 140 - 150 cm & 19 & 1122 62 ( =270 ) & 832 60 ( =262 ) & 636 32 ( =139 ) & 410 65 ( =285 ) + 150 - 160 cm & 77 & 1107 23 ( =204 ) & 835 34 ( =302 ) & 583 19 ( =163 ) & 466 44 ( =387 ) + 160 - 170 cm & 99 & 1080 17 ( =170 ) & 831 24 ( =241 ) & 580 17 ( =166 ) & 467 33 ( =332 ) + 170 - 180 cm & 17 & 1053 36 ( =149 ) & 833 65 ( =268 ) & 566 37 ( =153 ) & 458 98 ( =403 ) + & & 0.602 & 4.3 & 0.542 & 4.04 + & & 0.661 & 0.00224 & 0.705 & 0.00344 + & & 
0.00993 & 0.0668 & 0.00896 & 0.0631 + & & 0.312 & 0.788 & 0.478 & 0.76 + & & + 140 cm & 4 & 1129 55 ( =109 ) & 611 20 ( =39.7 ) & 518 31 ( =61.8 ) & 246 68 ( =136 ) + 140 - 150 cm & 16 & 1115 88 ( =354 ) & 816 44 ( =175 ) & 628 43 ( =172 ) & 382 80 ( =319 ) + 150 - 160 cm & 101 & 1100 20 ( =202 ) & 770 21 ( =209 ) & 659 10 ( =104 ) & 287 27 ( =269 ) + 160 - 170 cm & 142 & 1138 15 ( =174 ) & 802 18 ( =211 ) & 665 12 ( =146 ) & 324 23 ( =276 ) + 170 - 180 cm & 53 & 1199 23 ( =168 ) & 806 19 ( =138 ) & 673 18 ( =132 ) & 323 32 ( =233 ) + 180 cm & 2 & 1606 23 ( =32.1 ) & 928 61 ( =86.6 ) & 805 120 ( =171 ) & 329 72 ( =102 ) + & & 4.13 & 1.27 & 1.68 & 0.515 + & & 0.0012 & 0.276 & 0.138 & 0.765 + & & 0.0621 & 0.02 & 0.0263 & 0.00819 + & & 2.52 & 5.74 & 2.83 & 0.461 + Tables [table5cg], [table5ch] and [table5ci] show the dependence of the observables on minimum height for two-female, mixed and two-male dyads, respectively. As discussed and shown in figure [fastchild2] in the main text, there is a loss of linearity, but the patterns described there are still present, although partially attenuated, when gender is kept fixed.
& & + 140 cm & 7 & 956 74 ( =195 ) & 1186 150 ( =385 ) & 529 68 ( =180 ) & 935 190 ( =509 ) + 140 - 150 cm & 21 & 1022 57 ( =262 ) & 841 66 ( =300 ) & 591 36 ( =166 ) & 439 96 ( =442 ) + 150 - 160 cm & 114 & 1098 18 ( =197 ) & 780 20 ( =214 ) & 656 11 ( =112 ) & 306 26 ( =279 ) + 160 - 170 cm & 104 & 1131 16 ( =159 ) & 768 18 ( =185 ) & 658 11 ( =115 ) & 278 24 ( =245 ) + 170 - 180 cm & 6 & 1123 77 ( =188 ) & 706 34 ( =83.3 ) & 644 26 ( =64.8 ) & 213 59 ( =144 ) + & & 2.58 & 6.59 & 3.11 & 9.37 + & & 0.0381 & 4.66 & 0.0159 & 4.55 + & & 0.0401 & 0.0965 & 0.048 & 0.132 + & & 1.08 & 1.66 & 1.08 & 1.86 + & & + 140 cm & 14 & 1100 42 ( =159 ) & 947 82 ( =307 ) & 590 53 ( =199 ) & 593 100 ( =373 ) + 140 - 150 cm & 9 & 1107 66 ( =199 ) & 967 91 ( =273 ) & 664 42 ( =127 ) & 552 120 ( =346 ) + 150 - 160 cm & 99 & 1092 20 ( =195 ) & 829 29 ( =286 ) & 609 16 ( =160 ) & 429 38 ( =376 ) + 160 - 170 cm & 210 & 1128 12 ( =176 ) & 811 18 ( =262 ) & 618 13 ( =184 ) & 395 23 ( =328 ) + 170 - 180 cm & 37 & 1083 28 ( =172 ) & 806 44 ( =267 ) & 589 24 ( =143 ) & 404 61 ( =372 ) + 180 cm & 2 & 937 140 ( =204 ) & 687 3.2 ( =4.56 ) & 573 48 ( =67.8 ) & 253 55 ( =77.2 ) + & & 1.14 & 1.3 & 0.418 & 1.27 + & & 0.34 & 0.262 & 0.836 & 0.277 + & & 0.0153 & 0.0175 & 0.00569 & 0.0171 + & & 1.08 & 1.09 & 0.751 & 0.945 + & & + 140 cm & 18 & 1221 48 ( =203 ) & 977 110 ( =455 ) & 577 53 ( =226 ) & 631 130 ( =549 ) + 140 - 150 cm & 9 & 1301 140 ( =405 ) & 861 85 ( =255 ) & 640 47 ( =142 ) & 456 110 ( =343 ) + 150 - 160 cm & 21 & 1196 39 ( =180 ) & 739 36 ( =164 ) & 600 22 ( =103 ) & 320 57 ( =261 ) + 160 - 170 cm & 184 & 1238 13 ( =179 ) & 863 18 ( =241 ) & 700 13 ( =174 ) & 372 23 ( =315 ) + 170 - 180 cm & 219 & 1272 11 ( =156 ) & 834 12 ( =184 ) & 719 10 ( =151 ) & 310 15 ( =224 ) + 180 cm & 15 & 1271 46 ( =178 ) & 808 53 ( =207 ) & 705 35 ( =134 ) & 272 59 ( =229 ) + & & 1.46 & 2.56 & 4.51 & 5.03 + & & 0.202 & 0.0267 & 0.000499 & 0.00017 + & & 0.0156 & 0.0271 & 0.0468 & 0.0518 + & & 0.394 & 0.721 & 
0.903 & 0.826 + tables[ table5 cm ] and [ table5cn ] show the dependence on minimum height of all observables for dyads with minimum age in the 20 - 29 and 50 - 59 year ranges , respectively , showing the effect of removing children from the population .finally , tables [ table5cex3 ] and [ table5cex4 ] show the dependence of , respectively , minimum height and values on minimum age ranges . & & + 140 - 150 cm & 2 & 1161 180 ( =254 ) & 748 62 ( =87.2 ) & 613 41 ( =58.5 ) & 398 54 ( =75.7 ) + 150 - 160 cm & 75 & 1157 21 ( =184 ) & 800 27 ( =232 ) & 668 14 ( =122 ) & 311 35 ( =307 ) + 160 - 170 cm & 188 & 1175 13 ( =176 ) & 781 17 ( =231 ) & 658 12 ( =165 ) & 300 19 ( =265 ) + 170 - 180 cm & 95 & 1209 16 ( =152 ) & 808 18 ( =173 ) & 688 16 ( =156 ) & 309 22 ( =219 ) + 180 cm & 3 & 1262 130 ( =223 ) & 974 170 ( =300 ) & 625 9.5 ( =16.4 ) & 556 220 ( =373 ) + & & 1.19 & 0.818 & 0.684 & 0.756 + & & 0.315 & 0.514 & 0.603 & 0.554 + & & 0.0131 & 0.00906 & 0.00759 & 0.00838 + & & 0.567 & 0.906 & 0.482 & 0.962 + & & + 140 - 150 cm & 3 & 1043 41 ( =70.2 ) & 784 56 ( =97 ) & 746 63 ( =109 ) & 190 16 ( =28 ) + 150 - 160 cm & 19 & 1067 39 ( =171 ) & 757 34 ( =146 ) & 611 36 ( =156 ) & 334 52 ( =228 ) + 160 - 170 cm & 59 & 1162 25 ( =191 ) & 830 29 ( =226 ) & 664 22 ( =165 ) & 371 41 ( =315 ) + 170 - 180 cm & 30 & 1241 33 ( =182 ) & 862 44 ( =238 ) & 635 27 ( =148 ) & 436 68 ( =371 ) + & & 3.84 & 0.928 & 0.974 & 0.806 + & & 0.0118 & 0.43 & 0.408 & 0.493 + & & 0.0972 & 0.0254 & 0.0266 & 0.0221 + & & 1.12 & 0.502 & 0.886 & 0.687 + & + 0 - 9 years & 0.000332 & 0.143 & 0.662 & 0.127 + 10 - 19 years & 0.311 & 0.822 & 0.478 & 0.926 + 20 - 29 years & 0.315 & 0.514 & 0.603 & 0.554 + 30 - 39 years & 5 & 0.595 & 0.00388 & 0.0423 + 40 - 49 years & 0.000142 & 0.489 & 0.00545 & 0.0649 + 50 - 59 years & 0.0118 & 0.43 & 0.408 & 0.493 + 60 - 69 years & 0.091 & 0.23 & 0.24 & 0.182 + 70 years & 0.627 & 0.0424 & 0.0506 & 0.107 + & + 0 - 9 years & 3.42 & 1.11 & 2 & 1.04 + 10 - 19 years & 0.883 & 
0.346 & 0.519 & 0.275 + 20 - 29 years & 0.567 & 0.906 & 0.482 & 0.962 + 30 - 39 years & 2.32 & 0.943 & 0.702 & 1.28 + 40 - 49 years & 2.48 & 0.671 & 0.993 & 0.944 + 50 - 59 years & 1.12 & 0.502 & 0.886 & 0.687 + 60 - 69 years & 0.89 & 0.436 & 0.645 & 0.597 + 70 years & 1.08 & 2.05 & 2.01 & 1.7 + We consider a few possible statistical indicators of agreement between coders. Cohen's κ is a very popular indicator to compare the agreement between two coders, based on the equation κ = (p_o − p_e)/(1 − p_e), where p_o stands for the agreement rate between coders and p_e for the probability of random agreement. The agreement between pairs of coders according to this statistic is shown in table [tablecohen]. max age + & 0.815 & 0.961 & 0.636 & 0.476 & 0.582 & 0.555 + & 0.923 & 0.978 & 0.728 & 0.808 & 0.839 & 0.866 + & 0.810 & 0.944 & 0.647 & 0.449 & 0.508 & 0.526 + These results show that in general the agreement is higher for gender, followed by purpose and relation. The agreement between coders 1 and 3 is similar also concerning age, while the agreement with coder 2 is quite poor in these categories. Although there is no really sound mathematical way to evaluate the absolute value of these numbers, according to popular benchmarks an agreement between 0.8 and 1 is considered ``almost perfect'', an agreement between 0.6 and 0.8 ``substantial'', while an agreement between 0.4 and 0.6 is only ``moderate''. The Fleiss statistic generalises eq. [kappa] to deal with multiple coders and categories. The corresponding values are shown in table [tablefleiss]. age + 0.849 & 0.961 & 0.669 & 0.289 & 0.332 & 0.300 + We see that, in relative terms, agreement is higher for gender, followed by purpose and relation, and lowest for age. In absolute terms, according to the benchmarks, we have almost perfect agreement in gender and purpose, substantial agreement in relation, and ``fair'' (i.e., worse than ``moderate'') agreement for the age indicators, due to the effect of the different coding by coder 2.
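As a concrete illustration of Cohen's κ, here is a minimal pure-Python sketch; the function name and the example gender labels are hypothetical, not the study's actual codings:

```python
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa for two coders: (p_o - p_e) / (1 - p_e)."""
    n = len(labels_a)
    # p_o: observed agreement rate between the two coders
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # p_e: probability of random agreement, from each coder's marginals
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b[c] / n ** 2 for c in freq_a)
    return (p_o - p_e) / (1 - p_e)

# hypothetical gender codings of eight dyad members by two coders
coder1 = ["M", "M", "F", "F", "M", "F", "M", "M"]
coder2 = ["M", "M", "F", "M", "M", "F", "M", "F"]
print(round(cohen_kappa(coder1, coder2), 3))
```

κ = 1 corresponds to perfect agreement and κ = 0 to chance-level agreement; the benchmarks quoted above are rating scales on this range.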
In any case, if we plot the age difference between coders, as in figure [fzey], we see that although the disagreement with coder 2 is substantial, it is almost completely limited to a tendency of coder 2 to place pedestrians in a slightly younger category, i.e. the difference in age between the codings is small. Nevertheless, the Fleiss indicator does not take into account the magnitude of the difference, and is thus not completely adequate for ordered data. The Krippendorff statistic, which allows quantitative differences between coding results to be taken into account, gives the results shown in table [tablekrip]. age + 0.849 & 0.961 & 0.669 & 0.709 & 0.730 & 0.729 + Krippendorff does not provide any ``magic number'', but suggests using data with at least a minimum number of units (satisfied by all our categories) and requires a sufficiently high value, satisfied by purpose and gender, for reliable results (values between 0.667 and 0.8 could be used for ``tentative conclusions''). Using popular indicators of coder reliability, we have found that, in relative terms, the most reliable coding regards gender, followed by purpose.
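The point that nominal statistics ignore the magnitude of ordinal disagreements can be made concrete with Cohen's weighted κ, written as 1 − D_obs/D_exp for a disagreement weight on category indices (the nominal statistic is the special case of 0/1 weights). The sketch below uses invented age codings in which the second coder is systematically one decade younger; all names and data are hypothetical:

```python
def kappa(a, b, categories, weight):
    """Generalised (weighted) kappa: 1 - D_obs / D_exp, where the
    disagreement weight is a function of the two category indices."""
    idx = {c: i for i, c in enumerate(categories)}
    n, k = len(a), len(categories)
    d_obs = sum(weight(idx[x], idx[y]) for x, y in zip(a, b)) / n
    # expected disagreement from the two coders' marginal frequencies
    pa = [sum(x == c for x in a) / n for c in categories]
    pb = [sum(y == c for y in b) / n for c in categories]
    d_exp = sum(pa[i] * pb[j] * weight(i, j) for i in range(k) for j in range(k))
    return 1 - d_obs / d_exp

ages = ["20-29", "30-39", "40-49", "50-59"]
c1 = ["30-39", "40-49", "30-39", "50-59", "20-29", "40-49"]
c2 = ["20-29", "30-39", "30-39", "40-49", "20-29", "30-39"]  # one slot younger

nominal = kappa(c1, c2, ages, lambda i, j: i != j)          # distance ignored
ordinal = kappa(c1, c2, ages, lambda i, j: abs(i - j) / 3)  # linear weights
print(round(nominal, 3), round(ordinal, 3))
```

The systematic one-slot shift looks like near-chance agreement under the nominal weights but scores much higher under the linear ones, the same effect that makes an ordinal-aware statistic such as Krippendorff's more appropriate than Fleiss' for the age categories.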
In absolute terms, according to the Krippendorff statistic, which can better cope with the nature of our data, the purpose and gender codings may be considered reliable enough to provide sound findings, while the relation and age codings are reliable enough only for reporting tentative findings. The analysis based on these indicators provides an estimate of the reliability of the coding of pedestrians into different categories. We may nevertheless use another approach to test the reliability of our findings under different coding processes. Since for each category we analyse the values of the observables, we may compare these quantitative results between different coders. This comparison, which also has the advantage of being based on more mathematically sound statistical indicators (standard errors, ANOVA), is performed in section [codercomp], and shows again that for purpose and gender we have an almost perfect quantitative agreement, while for relation and age, although the agreement is less good, the major patterns of behaviour are qualitatively observed regardless of the coder. The results (on the common subset of data) for the purpose dependence of all observables for the main coder (coder 1) and the secondary coders are compared in tables [table1f1], [table1f2] and [table1f3].
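The ANOVA comparison mentioned above amounts to a one-way F test of each observable across coder-defined categories. A self-contained sketch follows; the function name and the velocity samples (in mm/s) are invented for illustration, not taken from the paper:

```python
def one_way_anova_F(groups):
    """One-way ANOVA: F = between-group mean square / within-group mean square."""
    values = [v for g in groups for v in g]
    n, k = len(values), len(groups)
    grand_mean = sum(values) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum((v - m) ** 2 for g, m in zip(groups, means) for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# hypothetical dyad velocities coded as leisure- vs work-oriented
leisure = [1085, 1060, 1110, 1093]
work = [1257, 1240, 1269, 1262]
print(round(one_way_anova_F([leisure, work]), 1))
```

A large F, judged against the F distribution with k − 1 and n − k degrees of freedom, indicates that the category means differ by far more than the within-category scatter would explain.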
& & + leisure & 136 & 1085 19 ( =220 ) & 796 21 ( =248 ) & 636 13 ( =151 ) & 351 28 ( =327 ) + work & 132 & 1257 14 ( =157 ) & 829 17 ( =196 ) & 723 12 ( =143 ) & 303 21 ( =241 ) + & & 53.1 & 1.41 & 23.5 & 1.88 + & & & 0.236 & 2.14 & 0.171 + & & 0.166 & 0.00529 & 0.0811 & 0.00703 + & & 0.893 & 0.146 & 0.594 & 0.168 + & & + leisure & 151 & 1093 17 ( =212 ) & 793 20 ( =243 ) & 641 12 ( =147 ) & 344 26 ( =318 ) + work & 117 & 1269 15 ( =159 ) & 837 18 ( =196 ) & 728 13 ( =146 ) & 306 22 ( =243 ) + & & 56.2 & 2.56 & 23.4 & 1.13 + & & & 0.111 & 2.18 & 0.289 + & & 0.175 & 0.00954 & 0.081 &0.00422 + & & 0.927 & 0.198 & 0.599 & 0.131 + & & + leisure & 133 & 1077 19 ( =217 ) & 789 22 ( =250 ) & 626 13 ( =145 ) & 354 29 ( =330 ) + work & 133 & 1262 14 ( =156 ) & 836 17 ( =195 ) & 732 12 ( =144 ) & 302 21 ( =239 ) + & & 63.6 & 2.93 & 35.6 & 2.13 + & & & 0.0881 & & 0.145 + & & 0.194 & 0.011 & 0.119 & 0.00802 + & & 0.982 & 0.211 & 0.734 & 0.18 + the differences between coders are thus always of one standard error or smaller , and the extremely significant statistical differences in the and distribution ( along with the less significant and ones ) are reported by all coders .the results ( on the common subset of data ) for the relation dependence of all observables between the main coder ( coder 1 ) and the secondary coders are compared in tables [ table2f1 ] , [ table2f2 ] and [ table2f3 ] .while all the major trends exposed in the main text are confirmed , quantitative results between coders may sometimes be different ( we refer in particular to the distribution for couples , extremely narrow according to coder 3 ) . 
& & + colleagues & 125 & 1256 14 ( =154 ) & 829 18 ( =196 ) & 725 13 ( =142 ) & 301 21 ( =239 ) + couples & 28 & 1087 37 ( =194 ) & 690 33 ( =174 ) & 611 21 ( =112 ) & 248 37 ( =198 ) + families & 40 & 1051 24 ( =153 ) & 864 54 ( =341 ) & 594 21 ( =134 ) & 492 69 ( =438 ) + friends & 56 & 1121 36 ( =271 ) & 777 24 ( =182 ) & 669 19 ( =145 ) & 286 32 ( =243 ) + & & 16.4 & 4.19 & 11.8 & 6.12 + & & & 0.00651 & 3.06 & 0.0005 + & & 0.167 & 0.0488 & 0.126 & 0.0697 + & & 1.33 & 0.612 & 0.934 & 0.678 + & & + colleagues & 116 & 1267 14 ( =156 ) & 839 18 ( =197 ) & 729 14 ( =147 ) & 308 23 ( =244 ) + couples & 44 & 1082 28 ( =184 ) & 703 21 ( =140 ) & 582 19 ( =125 ) & 296 33 ( =221 ) + families & 42 & 1054 25 ( =164 ) & 894 53 ( =341 ) & 651 25 ( =163 ) & 451 70 ( =457 ) + friends & 66 & 1131 31 ( =254 ) & 786 23 ( =188 ) & 673 17 ( =136 ) & 304 29 ( =238 ) + & & 19 & 6.55 & 11.9 & 3.13 + & & & 0.000276 & 2.54 & 0.0262 + & & 0.178 & 0.0692 & 0.119 & 0.0344 + & & 1.35 & 0.74 & 1.04 & 0.437 + & & + colleagues & 136 & 1259 14 ( =158 ) & 834 17 ( =194 ) & 727 13 ( =147 ) & 304 21 ( =242 ) + couples & 23 & 1070 42 ( =204 ) & 624 20 ( =96.4 ) & 578 20 ( =95.1 ) & 182 17 ( =81.2 ) + families & 50 & 1053 24 ( =172 ) & 867 44 ( =312 ) & 612 20 ( =140 ) & 478 59 ( =416 ) + friends & 54 & 1084 32 ( =235 ) & 780 27 ( =196 ) & 663 22 ( =159 ) & 298 33 ( =245 ) + & & 23.4 & 7.57 & 12.4 & 7.52 + & & & 7.11 & 1.36 & 7.61 + & & 0.213 & 0.0807 & 0.125 & 0.0801 + & & 1.27 & 0.915 & 1.06 & 0.849 + the results ( on the common subset of data ) for the gender dependence of all observables between the main coder ( coder 1 ) and the secondary coders are compared in tables [ table3f1 ] , [ table3f2 ] and [ table3f3 ] , showing that there is basically no difference in the coding of gender . 
& & + two females & 55 & 1076 32 ( =240 ) & 745 21 ( =155 ) & 629 15 ( =112 ) & 290 33 ( =242 ) + mixed & 86 & 1095 19 ( =173 ) & 820 31 ( =287 ) & 641 17 ( =159 ) & 384 39 ( =360 ) + two males & 127 & 1261 16 ( =178 ) & 836 17 ( =195 ) & 727 13 ( =150 ) & 305 22 ( =243 ) + & & 27.2 & 3.25 & 12.8 & 2.48 + & & & 0.0404 & 5.09 & 0.0855 + & & 0.171 & 0.0239 & 0.0879 & 0.0184 + & & 0.93 & 0.494 & 0.699 & 0.292 + & & + two females & 53 & 1078 33 ( =241 ) & 747 22 ( =158 ) & 637 15 ( =106 ) & 286 32 ( =233 ) + mixed & 89 & 1093 18 ( =173 ) & 814 30 ( =283 ) & 635 17 ( =159 ) & 382 38 ( =360 ) + two males & 126 & 1263 16 ( =177 ) & 838 17 ( =194 ) & 728 13 ( =150 ) & 306 22 ( =244 ) + & & 28.2 & 3.12 & 13.3 & 2.48 + & & & 0.0459 & 3.22 & 0.0853 + & & 0.176 & 0.023 & 0.091 & 0.0184+ & & 0.935 & 0.494 & 0.604 & 0.3 + & & + two females & 55 & 1074 32 ( =239 ) & 742 21 ( =153 ) & 636 14 ( =103 ) & 281 31 ( =230 ) + mixed & 89 & 1093 19 ( =175 ) & 824 31 ( =288 ) & 634 17 ( =161 ) & 397 39 ( =368 ) + two males & 124 & 1267 16 ( =173 ) & 834 17 ( =190 ) & 730 13 ( =150 ) & 298 21 ( =232 ) + & & 30.4 & 3.44 & 14.2 & 3.99 + & & & 0.0336 & 1.36 & 0.0196 + & & 0.187 & 0.0253 & 0.0969 & 0.0293 + & & 0.987 & 0.511 & 0.622 & 0.359 + the results ( on the common subset of data ) for the minimum age dependence of all observables between the main coder ( coder 1 ) and the secondary coders are compared in tables [ table4f1 ] , [ table4f2 ] and [ table4f3 ] . sadly , almost no groups with children are present in the common set .the drop in velocity with age is , on the other hand , confirmed in a statistically significant way by all coders . 
& & + 10 - 19 years & 16 & 1157 86 ( =343 ) & 715 31 ( =123 ) & 653 23 ( =92.3 ) & 223 38 ( =151 ) + 20 - 29 years & 58 & 1183 28 ( =215 ) & 765 28 ( =211 ) & 666 20 ( =149 ) & 268 33 ( =252 ) + 30 - 39 years & 96 & 1186 21 ( =203 ) & 817 21 ( =211 ) & 689 17 ( =166 ) & 327 27 ( =262 ) + 40 - 49 years & 41 & 1193 25 ( =161 ) & 811 27 ( =173 ) & 684 22 ( =143 ) & 327 38 ( =245 ) + 50 - 59 years & 31 & 1210 29 ( =160 ) & 880 46 ( =254 ) & 696 29 ( =160 ) & 407 65 ( =360 ) + 60 - 69 years & 21 & 1017 35 ( =160 ) & 869 66 ( =304 ) & 671 34 ( =156 ) & 401 85 ( =388 ) + 70 years & 5 & 949 15 ( =34.2 ) & 913 170 ( =379 ) & 608 28 ( =61.7 ) & 551 210 ( =470 ) + & & 3.35 & 1.81 & 0.462 & 1.91 + & & 0.00337 & 0.0974 & 0.836 & 0.0789 + & & 0.0715 & 0.0399 & 0.0105 & 0.0421 + & & 1.73 & 0.964 & 0.578 & 1.29 + & & + 0 - 9 years & 2 & 1190 220 ( =312 ) & 749 110 ( =152 ) & 700 87 ( =123 ) & 202 55 ( =77.8 ) + 10 - 19 years & 16 & 1169 84 ( =334 ) & 682 20 ( =80.1 ) & 646 20 ( =78.4 ) & 172 18 ( =73.3 ) + 20 - 29 years & 107 & 1163 19 ( =196 ) & 765 16 ( =165 ) & 655 13 ( =138 ) & 288 22 ( =231 ) + 30 - 39 years & 78 & 1217 24 ( =209 ) & 869 28 ( =244 ) & 727 21 ( =185 ) & 362 33 ( =290 ) + 40 - 49 years & 32 & 1181 31 ( =176 ) & 855 44 ( =249 ) & 645 22 ( =124 ) & 418 66 ( =373 ) + 50 - 59 years & 24 & 1074 32 ( =158 ) & 853 60 ( =293 ) & 706 29 ( =143 ) & 343 74 ( =363 ) + 60 - 69 years & 9 & 1047 42 ( =127 ) & 861 100 ( =311 ) & 655 45 ( =135 ) & 432 120 ( =375 ) + & & 2.1 & 3.09 & 2.3 & 2.13 + & & 0.0533 & 0.0061 & 0.0349 & 0.0505 + & & 0.0461 & 0.0663 & 0.0503 & 0.0467 + & & 0.842 & 0.833 & 0.482 & 1.14 + & & + 10 - 19 years & 14 & 1163 98 ( =367 ) & 701 31 ( =117 ) & 623 35 ( =130 ) & 218 48 ( =181 ) + 20 - 29 years & 64 & 1157 29 ( =236 ) & 758 24 ( =194 ) & 658 20 ( =158 ) & 274 27 ( =220 ) + 30 - 39 years & 50 & 1197 27 ( =193 ) & 830 32 ( =227 ) & 685 23 ( =162 ) & 349 43 ( =302 ) + 40 - 49 years & 77 & 1205 19 ( =163 ) & 832 23 ( =205 ) & 684 16 ( =141 ) & 351 33 ( 
=293 ) + 50 - 59 years & 36 & 1205 25 ( =152 ) & 820 35 ( =207 ) & 722 28 ( =168 ) & 300 39 ( =233 ) + 60 - 69 years & 20 & 1025 40 ( =179 ) & 903 74 ( =332 ) & 699 29 ( =129 ) & 418 97 ( =436 ) + 70 years & 7 & 956 26 ( =69.1 ) & 881 120 ( =326 ) & 605 36 ( =96.4 ) & 503 140 ( =382 ) + & & 3.69 & 2.04 & 1.33 & 1.67 + & & 0.00153 & 0.0604 & 0.245 & 0.129 + & & 0.0783 & 0.0449 & 0.0296 & 0.0369 + & & 1.74 & 0.755 & 0.732 & 1.09 + Table [table4a_av] shows the average age dependence of all observables, and the age dependence of the variables is also graphically shown in figure [f4a_av], while the remaining observable is shown in figure [f4b_av] (left panels). & & + 10 - 19 years & 60 & 1147 34 ( =264 ) & 865 43 ( =332 ) & 575 20 ( =158 ) & 496 57 ( =445 ) + 20 - 29 years & 370 & 1181 9.2 ( =178 ) & 793 12 ( =226 ) & 662 8.1 ( =155 ) & 313 14 ( =274 ) + 30 - 39 years & 269 & 1213 12 ( =199 ) & 831 14 ( =234 ) & 670 11 ( =174 ) & 360 18 ( =302 ) + 40 - 49 years & 195 & 1172 13 ( =183 ) & 852 17 ( =232 ) & 674 12 ( =167 ) & 387 23 ( =316 ) + 50 - 59 years & 114 & 1157 18 ( =194 ) & 825 20 ( =217 ) & 650 15 ( =159 ) & 376 30 ( =317 ) + 60 - 69 years & 69 & 1032 20 ( =168 ) & 875 40 ( =332 ) & 635 20 ( =163 ) & 467 50 ( =416 ) + 70 years & 12 & 886 29 ( =99.8 ) & 786 79 ( =275 ) & 588 19 ( =66.6 ) & 385 100 ( =363 ) + & & 13.2 & 2.26 & 3.79 & 4.75 + & & & 0.036 & 0.000955 & 8.72 + & & 0.0681 & 0.0124 & 0.0206 & 0.0257 + & & 1.67 & 0.275 & 0.598 & 0.603 + [Figure f4a_av: the observables (black circles, red squares and blue triangles) as a function of average (left) and maximum (right) age. Dashed lines provide standard error confidence bars; the point at 75 years corresponds to the ``70 years or more'' slot.] [Figure f4b_av: dependence on average (left) and maximum (right) age, with the same conventions.] Table [table4a_max] shows the maximum age dependence of all observables, and the age dependence of the variables is also graphically shown in figure [f4a_av], while the remaining observable is shown in figure [f4b_av] (right panels). & & + 10 - 19 years & 28 & 1141 55 ( =292 ) & 733 35 ( =186 ) & 628 17 ( =92.5 ) & 283 49 ( =258 ) + 20 - 29 years & 327 & 1177 9.6 ( =174 ) & 789 12 ( =225 ) & 660 8.2 ( =149 ) & 309 15 ( =278 ) + 30 - 39 years & 292 & 1203 12 ( =204 ) & 831 14 ( =238 ) & 648 10 ( =172 ) & 384 19 ( =321 ) + 40 - 49 years & 143 & 1201 15 ( =181 ) & 849 23 ( =275 ) & 680 15 ( =176 ) & 377 28 ( =336 ) + 50 - 59 years & 179 & 1178 14 ( =193 ) & 854 16 ( =217 ) & 674 13 ( =178 ) & 388 23 ( =306 ) + 60 - 69 years & 105 & 1043 17 ( =174 ) & 872 30 ( =310 ) & 638 16 ( =162 ) & 452 40 ( =407 ) + 70 years & 15 & 927 33 ( =128 ) & 760 65 ( =254 ) & 575 18 ( =67.9 ) & 374 85 ( =330 ) + & & 14.2 & 3.37 & 1.97 & 3.7 + & & & 0.0027 & 0.0668 & 0.00122 + & & 0.0731 & 0.0183 & 0.0108 & 0.0201 + & & 1.38 & 0.484 & 0.619 & 0.443 + It may be seen that the results concerning minimum (section [ageef]), maximum and average age are quite similar above 20 years. Nevertheless, using minimum age allows us to spot the presence of children below 10 years of age and verify their peculiar behaviour. Tables [table_hc_av] and [table_hc_max] show the dependence of all observables on, respectively, average and maximum height. Figures [comphfigv]
and [comphfigvbis] provide, on the other hand, a graphical comparison for two of the observables. We show these two figures since these observables mostly grow with height, so their analysis is easier. As the figures show, the ``average'' curves lie roughly in between the other two, with the ``minimum'' curve on top and the ``maximum'' one at the bottom, as expected for an observable that grows with height. This suggests that dyads have, at least regarding height, a behaviour that is an average of the individual ones, and all three indicators should be basically equivalent. In the main text we choose the ``minimum'' height indicator for two reasons: it allows us to better identify dyads with children, and it has a sufficient number of events in all occupied height slots. & & + 140 cm & 14 & 1044 52 ( =195 ) & 983 98 ( =365 ) & 527 39 ( =145 ) & 672 130 ( =492 ) + 140 - 150 cm & 22 & 1011 36 ( =168 ) & 910 81 ( =382 ) & 562 39 ( =183 ) & 570 100 ( =476 ) + 150 - 160 cm & 118 & 1110 23 ( =253 ) & 812 24 ( =260 ) & 629 14 ( =149 ) & 379 31 ( =340 ) + 160 - 170 cm & 472 & 1140 8.3 ( =181 ) & 821 12 ( =250 ) & 646 7.6 ( =165 ) & 372 15 ( =329 ) + 170 - 180 cm & 421 & 1222 8.7 ( =179 ) & 828 11 ( =224 ) & 685 8 ( =164 ) & 342 14 ( =282 ) + 180 cm & 42 & 1275 27 ( =174 ) & 793 26 ( =171 ) & 693 19 ( =122 ) & 274 34 ( =220 ) + & & 17.9 & 1.95 & 7.31 & 5.74 + & & & 0.0829 & 9.42 & 3.1 + & & 0.0764 & 0.00894 & 0.0327 & 0.0258 + & & 1.54 & 0.816 & 1.3 & 1.29 + & & + 140 cm & 3 & 1172 12 ( =21.4 ) & 650 49 ( =84.4 ) & 597 23 ( =39.1 ) & 189 60 ( =104 ) + 140 - 150 cm & 3 & 1051 79 ( =136 ) & 611 11 ( =19.6 ) & 518 39 ( =67.5 ) & 242 33 ( =57.2 ) + 150 - 160 cm & 49 & 988 25 ( =178 ) & 782 34 ( =240 ) & 594 20 ( =138 ) & 366 49 ( =346 ) + 160 - 170 cm & 336 & 1129 10 ( =191 ) & 820 14 ( =256 ) & 637 8.3 ( =151 ) & 378 19 ( =343 ) + 170 - 180 cm & 556 & 1191 8.1 ( =191 ) & 830 10 ( =237 ) & 671 7.1 ( =167 ) & 359 13 ( =306 ) + 180 cm & 142 &
dependence on average ( black and circles ) , minimum ( red and squares ) and maximum ( blue and triangles ) height . dashed lines provide standard error confidence bars . the points at 135 and 185 cm correspond to the `` less than 140 '' and `` more than 180 '' cm slots . ] | in recent years , researchers in pedestrian behaviour and crowd modelling have become more and more interested in the behaviour of walking social groups , since these groups represent an important portion of pedestrian crowds , and present peculiar dynamical features . it is anyway clear that , being group dynamics determined by human social behaviour , it probably depends on properties such as the purpose of the pedestrians , their personal relation , their gender , age , and body size . we may call these the `` intrinsic properties '' of the group ( opposed to extrinsic ones such as crowd density or environmental features ) . in this work we quantitatively analyse the dynamical properties of pedestrian dyads ( distance , spatial formation and velocity ) by analysing a large data set of automatically tracked pedestrian trajectories in an unconstrained `` ecological '' setting ( a shopping mall ) , whose relational group properties have been analysed by three different human coders . we observed that females walk slower and closer than males , that workers walk faster , at a larger distance and more abreast than leisure oriented people , and that inter group relation has a strong effect on group structure , with couples walking very close and abreast , colleagues walking at a larger distance , and friends walking more abreast than family members . pedestrian height ( obtained automatically through our tracking system ) influences velocity and abreast distance , both growing functions of the average group height .
results regarding pedestrian age show , as expected , that elderly people walk slowly , while active age adults walk at the maximum velocity . groups with children have a strong tendency to walk in a non abreast formation , with a large distance ( despite a low abreast distance ) . a cross - analysis of the interplay between these intrinsic features , also taking into account the effect of extrinsic crowd density , confirms these major effects but also reveals a richer structure . an interesting and unexpected result , for example , is that the velocity of groups with children _ increases _ with density , at least in the low - medium density range found under normal conditions in shopping malls . children also appear to behave differently according to the gender of the parent . |
our analysis of human mobility is based on a data set of 230 volunteers six - week travel diaries in frauenfeld , switzerland . this data set contains the volunteers personal information , including age , job and sex , and 36761 trip records . by calculating the spherical distance between the origin and destination from their longitudes and latitudes , we can obtain the length of each trip ( see details about data in * methods * ) . we first measure the individual displacement distributions from the data set . figs . 1(a)-1(c ) show three typical individuals displacement distributions ( table s1 presents all volunteers displacement distributions ) , from which we cannot find any universal scaling properties . indeed , when we use the _ kolmogorov - smirnov _ test to test whether the distributions fit power laws , we find that 87.8% of the individuals cannot pass the test ( statistical validation results are listed in table s2 , and the details of the _ kolmogorov - smirnov _ test are shown in * methods * ) . this result strongly suggests the absence of scaling laws in human travel at the individual level . to reveal the underlying structure of individual trips , we assign to each individual a mobility network , in which nodes denote locations visited by individuals , edges represent the trips between nodes and edge weight is defined as the number of corresponding trips . figs . 1(d)-1(f ) show three typical individuals mobility networks ( all networks are presented in table s1 ) . as shown in fig . 1 and table s1 , for most students and employees , their edge weights are highly heterogeneous . for each individual , we call the trip corresponding to the edge with the largest weight the _ dominant trip _ and define the domination ratio as the ratio of the weight of the dominant trip to the total weight . fig .
2 reports the distribution of domination ratios for different groups of individuals , from which we can see that the student group has the largest domination ratio on average and the employees average domination ratio is smaller than that of the students but larger than that of the other group . the difference results from the fact that students and employees frequently travel between homes and schools / workplaces on working days but retirees or homemakers do not have to do so . the peak values in the displacement distributions of students and employees are thus usually determined by the lengths of their dominant trips . because the lengths of dominant trips are not necessarily small , the displacement distribution for an individual is usually not right - skewed and is far different from a power law . in addition , the significant role of the dominant trip indicates that an individual s traveling process in general cannot be characterized by the lévy flight or truncated lévy flight . * individual mobility patterns . * ( a - c ) displacement distributions for three typical individuals ( ( a ) a student , ( b ) an employee , ( c ) a retiree ) , where the peak values for the student and the employee result from the trips between two most frequently visited locations . ( d - f ) mobility networks for the three individuals , where the area of a node is proportional to its number of visits and the width of an edge is proportional to its weight . ] * distribution of the domination ratios . * ( a ) population . ( b ) student group . ( c ) employee group . ( d ) others . in each panel , the number of group members and the average domination ratio are indicated . ] * displacement distribution of the aggregated data . * the solid line indicates a power law with an exponential cutoff . the data were binned using the logarithmic binning method ( see * methods * for details ) . ] * cumulative displacement distributions for a single mode of transportation .
* ( a ) 12,028,929 taxi passenger trajectories in beijing . ( b ) 46,541 car trips in detroit . ( c ) 783,210 bus trips in shijiazhuang . ( d ) 205,534 air - flight travels in the us . ] the aggregated displacement distribution of individuals ( see fig . 3 ) is well approximated by a power law with an exponential cutoff ( the fitness significance p - value by the _ kolmogorov - smirnov _ test is 1.000 and the standard _ kolmogorov - smirnov _ distance is 0.039 , see * methods * and fig . s1 for details ) , which is similar to those observed for bank notes and mobile phone users . as shown above , this scaling property is not a simple combination of many analogous individuals . we assume that the total travel cost is $C$ and that the number of trips with cost $c_i$ is $n_i$ . according to the _ maximum entropy principle _ , the two constraints $\sum_i n_i = N$ and $\sum_i n_i c_i = C$ lead to the solution $n_i \propto \exp ( -c_i / \langle c \rangle )$ , where $\langle c \rangle = C / N$ is the average travel cost . denote the density of trips with cost $c$ by $p ( c )$ ; then $p ( c ) \propto \exp ( -c / \langle c \rangle )$ . the travel cost is commonly approximated as a weighted sum of two terms , the costs involving time and money , respectively . previous empirical studies have suggested that the monetary cost is approximately proportional to the travel distance , while the travel time approximately obeys a hybrid form with both a logarithmic and a linear term in the distance . the logarithmic term results from the mixture of modes of transportation . apparently , people move faster when traveling longer distances : we walk from classroom to office but take an airplane from the us to china . figure s2 reports the statistics related to travel times of the data set used in this paper . although the data set is not large enough and contains some noisy points , overall speaking , the travel time grows in the hybrid form mentioned above .
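the maximum - entropy argument of this paragraph can be written out compactly ; the symbols below ( $a , b , \mu , \nu , \eta , A , B , \lambda , \kappa$ ) are an assumed notation for illustration , not the original fitted coefficients :

```latex
% maximum entropy with fixed trip number N and total travel cost C
% gives an exponential cost distribution:
\begin{align*}
  p(c) &\propto e^{-c/\langle c\rangle}, \qquad \langle c\rangle = C/N, \\
% cost as a weighted mixture of time and money, with the monetary cost
% proportional to distance and a hybrid travel-time law:
  c(r) &= a\,c_t(r) + b\,c_m(r), \qquad
  c_t(r) \approx \mu + \nu \ln r + \eta r, \qquad c_m(r) \propto r, \\
% substituting c(r) yields a power law with an exponential cutoff:
  p(r) &\propto e^{-(A \ln r + B r)/\langle c\rangle}
        \propto r^{-\lambda}\, e^{-r/\kappa},
  \qquad \lambda = \frac{A}{\langle c\rangle}, \qquad
  \kappa = \frac{\langle c\rangle}{B},
\end{align*}
% where A and B collect the time and money coefficients above.
```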
integrating the aforementioned terms , we obtain a displacement distribution of the form $p ( r ) \propto r^{-\lambda} \exp ( -r / \kappa )$ , where the exponent $\lambda$ and the cutoff $\kappa$ are determined by the cost coefficients . when $r$ is large , the distribution is approximated as a power - law with an exponential cutoff . indeed , with the coefficients estimated from the real data , the remaining correction term can be neglected for large $r$ . as shown in fig . s3 , the corresponding fitting line is very close to a power law with an exponential cutoff ( but with a slightly higher power - law exponent 1.38 ) . a direct corollary of the maximum entropy principle is that the displacement distribution should follow an exponential form if it only accounts for trips from a single mode of transportation , because in that case the travel cost is simply proportional to the distance . this corollary gets supportive evidence from a number of empirical studies on disparate systems ( bazzani _ et al . _ observed a slight deviation from the exponential law ) . fig . 4 reports empirical cumulative distributions for taxi trajectories in beijing , car trips in detroit ( downloaded from _ www.semcog.org _ ) , bus trips in shijiazhuang ( collected by the authors ) and air flights in the us . the probability density distributions are shown in fig . s4 . all distributions can be well characterized by exponential - like functions . the general lessons that we learned from the present analysis could be used to refine our knowledge of human mobility patterns . the displacement distributions for aggregated data usually display power - law decay with an exponential cutoff . meanwhile , there are examples ranging from taxi trips to air flights in which the displacement distributions are exponential . in these examples ,
in a word, we believe the travel cost is one main reason resulting in the regularities in aggregated statistics .the present results suggest that the form ( power law or exponential or other ) of deterrence function in the gravity law for human travel may be sensitive to the modes of transportation under consideration .this study warns researchers of the risk of inferring individual behavioral patterns directly from aggregated statistics .analogously , the temporal burstiness of human activities is widely observed , and the researchers are aware of the fact that the aggregated scaling laws could either be a combination of a number of individuals , each of whom displays scaling laws similar to the population , or the result of a mixture of diverse individuals , most of whom exhibit far different statistical patterns than the population . in comparison ,such issues are less investigated for spatial burstiness .in particular , experimental analyses on individuals has rarely been reported .determining whether the displacement distribution of an individual follows a power - law distribution will require further data and analysis .it is already known to the scientific community that a number of poissonian agents with different acting rates can make up a power - law inter - event time distribution at the aggregated level , and very recently , proekt _ et al . _ showed that the aggregated scaling laws on inter - event time distribution may be resulted from different time scales . 
a similar ( yet different ) idea has been applied to explain the aggregated scaling laws in walking behavior . although mathematically and technically different , this work embodies some similar perspectives , because the different transportation modes indeed assign different scales onto space : the world becomes smaller by air flights while a city is really big by walking . an elegant analogy between temporal and spatial human behaviors will benefit the studies of each other . many known mechanisms underlie the scaling laws of complex systems , including rich get richer , good get richer , merging and regeneration , optimization , hamiltonian dynamics , stability constraints , and so on . the individual mobility model by song _ et al . _ is a typical example embodying the rich get richer mechanism . we have implemented this model . as shown in fig . s5 , the exploration and preferential return model can well reproduce the diversity of individual mobility patterns . in addition , for this model , the gibbs entropy of the displacement distribution at the individual level increases continuously due to the increasing number of locations as well as links connecting location pairs . however , the exploration and preferential return model does not explain why the lengths of exploration trips should follow a power law , which is a core assumption leading to the power - law - like aggregated displacement distribution . therefore , our work has complemented ref . and other related works in two aspects : ( i ) providing supportive empirical observation at the individual level ; ( ii ) providing an alternative explanation of the emergence of scaling in the aggregated displacement distribution .
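the exploration and preferential return mechanism discussed above can be sketched in a few lines . the sketch below tracks only visit counts ( omitting the power - law jump lengths ) , and the parameter values rho = 0.6 and gamma = 0.21 are illustrative assumptions , not the exact implementation behind fig . s5 :

```python
import random
from collections import Counter


def epr_trajectory(steps, rho=0.6, gamma=0.21, seed=1):
    """Minimal exploration-and-preferential-return (EPR) walker.

    With probability rho * S**(-gamma) the walker explores a brand-new
    location, where S is the number of distinct locations visited so
    far; otherwise it returns to a known location chosen with
    probability proportional to its past visit counts."""
    rng = random.Random(seed)
    visits = Counter({0: 1})        # location id -> visit count
    next_id = 1
    for _ in range(steps):
        if rng.random() < rho * len(visits) ** (-gamma):
            loc = next_id           # exploration: a brand-new location
            next_id += 1
        else:                       # preferential return
            locs = list(visits)
            loc = rng.choices(locs, weights=[visits[l] for l in locs])[0]
        visits[loc] += 1
    return visits


if __name__ == "__main__":
    visits = epr_trajectory(5000)
    # number of distinct locations grows sublinearly, and visits
    # concentrate on a few locations, as in fig. 1 of the main text
    print(len(visits), max(visits.values()) / sum(visits.values()))
```

because returns are proportional to past visits ( rich get richer ) , a handful of locations accumulate most of the visits , which is exactly the heterogeneity of edge weights observed in the empirical mobility networks .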
very recently , from the analysis of mobility patterns in an online game , szell _ et al ._ observed a characteristic jump length and conjectured that the existence of the characteristic length may be due to the single mode of transportation . the present theory could explain their observation , since a jump in such an online game costs time that is proportional to the jump length . * data description . * this work was performed using a travel survey data set that contains 230 volunteers six - week travel diaries in frauenfeld , switzerland . the survey was conducted among 230 volunteers from 99 households in frauenfeld and the surrounding areas in canton thurgau from august to december 2003 . the volunteers reported their daily travel by filling out a ( paper and pencil based ) self - administered questionnaire day by day over a six - week period . each reported trip includes the information of origin , destination and purpose . the origin and destination of a trip were geocoded by longitude and latitude . the quality of the geocoding is very high , with 60% of trips captured within 100 m of their true origins and destinations and 90% within 500 m. the purpose of each trip was classified into work , shopping , education , home , leisure , business and other . the data have been cross - checked to ensure consistency and filtered to remove outliers as well as unclear and omitted destination addresses . the final cleaned data set includes 36761 trip records . besides , the data set also contains socio - demographic information about the volunteers , such as age , job and sex . * kolmogorov - smirnov ( ks ) test .
* given an observed distribution , we first assume that it obeys a certain form with a set of parameters , whose values are estimated by using the maximum likelihood method . the standard ks distance $d$ is defined as the maximal distance between the cumulative distribution functions of the observed data and the fitting curve . we independently sample a set of data points according to the fitted distribution , such that the number of sampled data points is the same as the number of observed data points , and then calculate the maximal distance ( denoted by $d^{\prime}$ ) between the fitting curve and the cumulative distribution function of the sampled data points . the p - value is defined as the probability that $d^{\prime} > d$ . in this paper , we always implement 1000 independent runs to estimate the p - value . * logarithmic binning . * the statistical nature of sampling will lead to increasing noise in the tails of empirical power - law - type distributions . applying the procedure of logarithmic binning can smooth the noisy tail . logarithmic binning is a procedure of averaging the data that fall in specific bins whose size increases exponentially . for each bin , the observed values are normalized by dividing by the bin width and the total number of observations ( see fig . ) . this work was partially supported by the national natural science foundation of china ( nnsfc ) under grant nos . 11222543 , 11205040 , 11275186 and 91024026 , program for new century excellent talents in university under grant no . ncet-11 - 0070 , and huawei university - enterprise cooperation project under grant no . ybcb2011057 . gonzález , m. c. , hidalgo , c. a. & barabási , a. l. understanding individual human mobility patterns . _ nature _ * 453 * , 779 - 782 ( 2008 ) . lu , x. , bengtsson , l. & holme , p. predictability of population displacement after the 2010 haiti earthquake . _ * 109 * , 11576 - 11581 ( 2012 ) . jiang , b. , yin , j. & zhao , s. characterizing the human mobility pattern in a large street network . e _ * 80 * , 021136 ( 2009 ) . yoon , j.
, noble , b. d. , liu , m. & kim , m. building realistic mobility models from coarse - grained traces . in _ proc . of the acm mobisys06 _ , ( uppsala , sweden ) , pp 177 - 190 ( 2006 ) . vespignani , a. predicting the behavior of techno - social systems . _ science _ * 325 * , 425 - 428 ( 2009 ) . barthélemy , m. spatial networks . * 499 * , 1 - 101 ( 2011 ) . balcan , d. & vespignani , a. phase transitions in contagion processes mediated by recurrent mobility patterns . _ * 7 * , 581 - 586 ( 2011 ) . belik , v. , geisel , t. & brockmann , d. natural human mobility patterns and spatial spread of infectious diseases . x _ * 1 * , 011001 ( 2011 ) . ni , s. & weng , w. impact of travel patterns on epidemic dynamics in heterogeneous spatial metapopulation networks . e _ * 79 * , 016111 ( 2009 ) . zhao , z .- d . , liu , y. & tang , m. epidemic variability in hierarchical geographical networks with human activity patterns . _ chaos _ * 22 * , 023150 ( 2012 ) . horner , m. w. & o'kelly , m. e. embedding economies of scale concepts for hub networks design . _ j. transp . geogr . _ * 9 * , 255 - 265 ( 2001 ) . um , j. , son , s .- w . , lee , s .- i . , jeong , h. & kim , b. j. scaling laws between population and facility densities . _ * 106 * , 14236 - 14240 ( 2009 ) . zheng , v. m. , zheng , y. , xie , x. & yang , q. collaborative location and activity recommendations with gps history data . in _ proceedings of the 19th international conference on world wide web _ , ( new york , acm press ) , pp 1029 - 1038 ( 2010 ) . clements , m. , serdyukov , p. , de vries , a. p. & reinders , m. j. t. personalised travel recommendation based on location co - occurrence . scellato , s. , noulas , a. & mascolo , c. exploiting place features in link prediction on location - based social networks . in _ proc . of the acm kdd11 _ , ( new york , acm press ) , pp 1046 - 1054 ( 2011 ) . brockmann , d. , hufnagel , l. & geisel , t. the scaling laws of human travel . _ nature _ * 439 * , 462 - 465 ( 2006 ) . , hao , q.
, wang , b .- h . & zhou , t. origin of the scaling law in human mobility : hierarchy of traffic systems . _ phys . e _ * 83 * , 036117 ( 2011 ) . song , c. , koren , t. , wang , p. & barabási , a. l. modelling the scaling properties of human mobility . phys . _ * 6 * , 818 - 823 ( 2010 ) . petrovskii , s. , mashanova , a. & jansen , v. a. a. variation in individual walking behavior creates the impression of a lévy flight . _ proc . natl . _ * 108 * , 8704 - 8707 ( 2011 ) . chalasani , v. s. , engebretsen , ø. , denstadli , j. m. & axhausen , k. w. precision of geocoded locations and network distance estimates . _ j. transport . * 8 * , 1 - 15 ( 2005 ) . clauset , a. , shalizi , c. r. & newman , m. e. j. power - law distributions in empirical data . _ siam rev . _ * 51 * , 661 - 703 ( 2009 ) . song , c. , qu , z. , blumm , n. & barabási , a. l. limits of predictability in human mobility . _ science _ * 327 * , 1018 - 1021 ( 2010 ) . balescu , r. _ equilibrium and nonequilibrium statistical mechanics _ . ( new york : john wiley ) , ( 1975 ) . willumsen , l. g. travel networks . in _ handbook of transport modelling _ , eds hensher , d. a. & button , k. j. ( new york : pergamon ) , pp 165 - 180 ( 2000 ) . rietveld , p. , zwart , b. , van wee , b. & van den hoorn , t. on the relationship between travel time and travel distance of commuters . sci . _ * 33 * , 269 - 287 ( 1999 ) . li , s. , wang , h. & wang , z. a study on tour time planning of domestic sightseeing travel itineraries . _ * 20 * , 51 - 56 ( 2005 ) . oosterhaven , j. a. & rietveld , p. transport costs , location and the economy . in _ location and competition _ , eds brakman , s. & garretsen , h. ( new york : routledge ) , pp 32 - 60 ( 2005 ) . bazzani , a. , giorgini , b. , rambaldi , s. , gallotti , r. & giovannini , l. statistical laws in urban mobility from microscopic gps data in the area of florence , _ j. stat . _ p05001 ( 2010 ) . roth , c. , kang , s. m. , batty , m. & barthélemy , m.
structure of urban movements : polycentric activity and entangled hierarchical flows . _ plos one _ * 6 * , e15923 ( 2011 ) . jiang , b. & jia , t. exploring human mobility patterns based on location information of us flights . arxiv:1104.4578v2 liang , x. , zheng , x. , lü , w. , zhu , t. & xu , k. the scaling of human mobility by taxis is exponential . _ physica a _ * 391 * , 2135 - 2144 ( 2012 ) . gallotti , r. , bazzani , a. & rambaldi , s. towards a statistical physics of human mobility . c _ * 23 * , 1250061 ( 2012 ) . peng , c. , jin , x. , wong , k. c. , shi , m. & li , p. collective human mobility pattern from taxi trips in urban area . _ plos one _ * 7 * , e34487 ( 2012 ) . simini , f. , gonzález , m. c. , maritan , a. & barabási , a .- l . a universal model for mobility and migration patterns . _ nature _ * 484 * , 96 - 100 ( 2012 ) . barabási , a. l. the origin of bursts and heavy tails in human dynamics . _ nature _ * 435 * , 207 - 211 ( 2005 ) . malmgren , r. d. , stouffer , d. b. , motter , a. e. & amaral , l. a. n. a poissonian explanation for heavy tails in e - mail communication . _ * 105 * , 18153 - 18158 ( 2008 ) . hidalgo , c. a. conditions for the emergence of scaling in the inter - event time of uncorrelated and seasonal systems . _ physica a _ * 369 * , 877 - 883 ( 2006 ) . wu , y. , zhou , c. , xiao , j. , kurths , j. & schellnhuber , h. j. evidence for a bimodal distribution in human communication . _ * 107 * , 18803 - 18808 ( 2010 ) . proekt , a. , banavar , j. r. , maritan , a. & pfaff , d. w. scale invariance in the dynamics of spontaneous behavior , _ proc . _ * 109 * , 10564 - 10569 ( 2012 ) . mitzenmacher , m. a brief history of generative models for power law and lognormal distributions . _ internet math . _ * 1 * , 226 - 251 ( 2004 ) . newman , m. e. j. power laws , pareto distributions and zipf s law . _ contemp . phys . _ * 46 * , 323 - 351 ( 2005 ) . simkin , m. v. & roychowdhury , v. p. re - inventing willis . _ * 502 * , 1 - 35 ( 2011 ) . simon , h. a.
on a class of skew distribution functions . _ biometrika _ * 42 * , 425 - 440 ( 1955 ) . barabási , a. l. & albert , r. emergence of scaling in random networks . _ science _ * 286 * , 509 - 512 ( 1999 ) . lü , l. , zhang , z .- k . & zhou , t. deviation of zipf s and heaps laws in human languages with limited dictionary sizes . * 3 * , 1082 ( 2013 ) . garlaschelli , d. , capocci , a. & caldarelli , g. self - organized network evolution coupled to extremal dynamics . phys . _ * 3 * , 813 - 817 ( 2007 ) . zhou , t. , medo , m. , cimini , g. , zhang , z .- k . & zhang , y .- c . emergence of scale - free leadership structure in social recommender systems . _ plos one _ * 6 * , e20648 ( 2011 ) . kim , b. j. , trusina , a. , minnhagen , p. & sneppen , k. self organized scale - free networks from merging and regeneration . j. b _ * 43 * , 369 - 372 ( 2005 ) . valverde , s. , cancho , f. & solé , r. v. scale - free networks from optimal design . lett . _ * 43 * , 369 - 372 ( 2002 ) . bartumeus , f. , da luz , m. g. e. , viswanathan , g. m. & catalan , j. animal search strategies : a quantitative random - walk analysis . _ ecology _ * 86 * , 3078 - 3087 ( 2005 ) . baiesi , m. & manna , s. scale - free networks from a hamiltonian dynamics . e _ * 68 * , 047103 ( 2003 ) . perotti , j. i. , billoni , o. v. , tamarit , f. a. , chialvo , d. r. & cannas , s. a. emergent self - organized complex network topology out of stability constraints . lett . _ * 103 * , 108701 ( 2009 ) . szell , m. , sinatra , r. , petri , g. , thurner , s. & latora , v. understanding mobility in a social petri dish . * 2 * , 457 ( 2012 ) . milojević , s. power law distributions in information science : making the case for logarithmic binning . tec . _ * 61 * , 2417 - 2425 ( 2010 ) . | uncovering human mobility patterns is of fundamental importance to the understanding of epidemic spreading , urban transportation and other socioeconomic dynamics embodying spatiality and human travel .
according to the direct travel diaries of volunteers , we show the absence of scaling properties in the displacement distribution at the individual level , while the aggregated displacement distribution follows a power law with an exponential cutoff . given the constraint on total travelling cost , this aggregated scaling law can be analytically predicted by the mixture nature of human travel under the principle of maximum entropy . a direct corollary of this theory is that the displacement distribution of a single mode of transportation should follow an exponential law , which is also supported by evidence in known data . we thus conclude that the travelling cost shapes the displacement distribution at the aggregated level . positioning systems in mobile phones and vehicles and wi - fi devices in laptop computers and personal digital assistants have made quantitative analyses of human mobility patterns possible . these analyses have a significant potential to reveal novel statistical regularities of human behavior , refine our understanding of the socioeconomic dynamics embodying spatiality and human mobility , and eventually contribute to controlling disease , designing transportation systems , locating facilities , providing location - based services , and so on . aggregated data from bank notes , mobile phones and onboard gps measurements showed that the displacement distribution of human mobility , for both long - range travel and daily movements , approximately follows a power law . the scaling laws in long - range travel may result from the hierarchical organization of transportation systems , while the scaling laws in daily movements have recently been explained by the _ exploration and preferential return _ mechanism . thus far , we still lack solid results about human mobility patterns at the individual level .
inferring individual features from the aggregated data is very risky because the scaling law for the population could be a mixture of many individuals with different statistics . in addition , the aforementioned data are not sufficient to draw conclusions at the individual level . first , data such as gps records from taxis and the trajectories of bank notes consist of many individual movements , but these individuals are not easily distinguished from one another . second , data such as gps records from mobile phones and the trajectories of bank notes cannot accurately capture purposeful trips with explicit origins and destinations . in fact , the displacement between two activations of a mobile phone may be just a tiny portion of a purposeful trip or a combination of several sequential trips , while the displacement between two registrations of a bank note could be the result of a number of sequential trips made by different people . instead of using proxy data , we analyze the travel diaries of hundreds of volunteers . though the data set is small , it contains personal profiles and explicit positions of origins and destinations , allowing quantitative and authentic analyses at the individual level . in contrast to the scaling laws in aggregated data , individuals show diverse mobility patterns , and few of them display the scaling property . in fact , the trajectories of students and employees are dominated by trips connecting homes with schools and workplaces , respectively , while trips are distributed more homogeneously among different locations for others such as retirees , homemakers and unemployed people . the aggregated displacement distribution follows a power law with an exponential cutoff , which can be analytically explained by the mixed nature of human travel under the principle of maximum entropy .
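the mixture argument can be illustrated numerically . the sketch below is not from the paper : it uses the textbook fact that an exponential displacement whose rate is itself gamma - distributed across modes has a lomax ( power - law tail ) marginal , and compares tail mass against a single exponential mode with the same mean . all parameter values are illustrative assumptions .

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# each trip uses a mode with its own characteristic scale: rate ~ gamma
rates = rng.gamma(shape=2.0, scale=1.0, size=n)   # heterogeneity across modes
mixture = rng.exponential(1.0 / rates)            # per-mode displacements are exponential

# a single mode with the same overall mean, for comparison
single = rng.exponential(mixture.mean(), size=n)

# the aggregated (mixture) distribution has a much heavier tail
tail_mix = np.mean(mixture > 10 * mixture.mean())
tail_one = np.mean(single > 10 * single.mean())
print(tail_mix > 5 * tail_one)  # aggregation alone produces the heavy tail
```

this is exactly the "mixture of exponentials" effect the text appeals to : each mode alone is exponential , yet the population - level distribution is power - law - like .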
in addition , this theory predicts that the displacements using a single mode of transportation will follow an exponential distribution , which is also supported by the empirical data on taxi trips , car trips , bus trips and air flights . |
cloud radio access networks ( c - rans ) have received considerable research interest as one of the most promising solutions to mitigate interference , fulfil energy efficiency demands and support high - rate transmission in the fifth generation cellular network . in c - rans , a large number of remote radio heads ( rrhs ) are deployed , which operate as non - regenerative relays to forward received signals from mobile stations ( mss ) to the centralized base band unit ( bbu ) pool through wired / wireless backhaul links for uplink transmission . to suppress the inter - rrh interference by using cooperative processing techniques at the bbu pool , the channel state information ( csi ) of both the radio access links ( als ) and wireless backhaul links ( bls ) is required . in , although a segment - training scheme was proposed to estimate the individual channel coefficients for the two - hop scenario under flat fading environments , the proposal results in high overhead consumption for backhaul transmission since the rrhs need to forward both al and bl training sequences to the bbu pool . on the other hand , the superimposed - training scheme , where a training sequence is superimposed on the data signal , can significantly reduce the overhead and supports channel estimation in time - varying environments using the complex - exponential basis expansion model ( ce - bem ) . however , straightforward implementation of superimposed training in c - rans would degrade transmission quality because superimposing both al and bl training sequences on the data signal reduces the effective signal - to - noise ratio ( snr ) .
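as background for the algorithm that follows , a ce - bem represents a time - varying channel over a block of n samples by a handful of complex exponentials . the sketch below is not from the letter ; the block length , bem order and doppler frequencies are illustrative assumptions . it builds the basis matrix and fits the bem coefficients by least squares :

```python
import numpy as np

rng = np.random.default_rng(1)
N, Q = 256, 4                                 # block length and BEM order (assumed)

# complex-exponential basis: columns e^{j 2 pi (q - Q/2) n / N}, q = 0..Q
n = np.arange(N)
B = np.exp(2j * np.pi * np.outer(n, np.arange(Q + 1) - Q / 2) / N)

# a smoothly time-varying channel: three doppler tones inside the bem span
f = np.array([-1.3, 0.4, 1.1]) / N            # normalized doppler frequencies
a = (rng.standard_normal(3) + 1j * rng.standard_normal(3)) / np.sqrt(6)
h = (a * np.exp(2j * np.pi * np.outer(n, f))).sum(axis=1)

# least-squares fit of the bem coefficients, then reconstruction
c, *_ = np.linalg.lstsq(B, h, rcond=None)
nmse = np.linalg.norm(h - B @ c) ** 2 / np.linalg.norm(h) ** 2
print(nmse < 0.2)  # a low-order basis captures most of the time variation
```

the point of the model is dimensionality reduction : instead of estimating all n channel samples of the al , the receiver estimates only q + 1 bem coefficients per tap .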
motivated by the need to reduce the training overhead and enhance the channel estimation performance at the bbu pool , a superimposed - segment training design is proposed in this letter , where superimposed training is implemented for the radio al while segment training is applied for the wireless bl . moreover , based on the training design , a ce - bem based maximum a posteriori probability ( map ) channel estimation algorithm is developed , where the bem coefficients of the time - varying radio al and the channel fading of the quasi - static wireless bl are first obtained , after which the time - domain channel samples of the radio al are restored by maximizing the average effective snr . _ notation _ : the transpose , hermitian and inverse of a matrix are denoted by , and , respectively ; represents the two - norm of a vector ; defines the magnitude of a complex argument ; is the kronecker product ; denotes the diagonal matrix with being the diagonal element ; represents the trace of a matrix ; and are the unit diagonal matrix and zero matrix , respectively ; stands for the vector with each entry being unit value ; denotes the expectation of a random variable , and represents its estimate . consider a c - ran consisting of one bbu pool and multiple rrhs , as depicted in fig . the rrhs operate in half - duplex modes , and different mss served by the same rrh are each allocated a single subcarrier through the orthogonal frequency division multiplexing access ( ofdma ) technique . it is assumed that mss move continuously , while rrhs remain fixed . thus , the channels of the radio als vary during one transmission block , while those of the wireless bls undergo quasi - static flat fading . due to the orthogonality of ofdma across multiple mss , we can focus on the transmission of a single ms . let and denote the data vector and cyclical training sequence transmitted from the ms , respectively .
the training sequence from the rrh is denoted by . the -th channel sample of the time - varying radio al is denoted by , with mean zero and variance , while the channel fading of the quasi - static flat bl is denoted by , which has a complex gaussian distribution with mean zero and variance . the transmit powers of the ms and rrh are denoted by and , respectively . the noise variance at the rrh and bbu pool is denoted by . it is assumed that the bbu pool acquires the knowledge of , , , , , , and . during each transmission block , the ms transmits a signal of symbol length to the rrh , in which the -th entry of is given by where denotes the -th entry of with m - ary phase shift keying ( mpsk ) modulation constrained by , and represents the -th entry of with whose period is denoted by . the is within . without loss of generality , we further assume that is an integer . the -th observation at the rrh is written as where is additive white gaussian noise ( awgn ) at the rrh . then the rrh scales the received signal by , and inserts prior to the received signal . the sequence is of length and its -th entry satisfies . the bbu pool receives two separate signals as where ^{t} ] and ^{t} ] is an dimensional matrix with . by defining of dimension , we can obtain _q_s= _ s , & , + 0 , & , where is an dimensional vector . left multiplying by yields whose ]-th entries denoted by can be expressed as it can be verified that for with . although is not a gaussian vector , it is effective to use the gaussian distribution to model the noise behavior for estimation problems . thus , we choose the following function to be the nominal likelihood function of as defining ^{t} ] , thus we remove the last term for high snr approximation , i.e.
, similarly , due to the non - linearity of the map estimation , it is hard to derive the corresponding mse expressions in closed form . moreover , it is known that the map estimation mses converge to the cramér - rao bound ( crb ) when the training length is sufficiently large , and thus it is effective to use the crbs in the computation of the aesnr as substituting the above expressions for and into yields where \!\frac{\sigma_{n}^{2}}{p_{s}},\end{aligned}\ ] ] and . on setting , the optimization for transforms to . clearly , the optimization problem described in constrained by is concave , and can be directly obtained from the lagrange dual function as substituting back to , the optimization problem becomes whose solution is leading to combining the estimation for and the restoration for s , the proposed channel estimation algorithm is summarized in table i. moreover , the following proposition is given to demonstrate the effectiveness of the proposed algorithm : the iterative channel estimation algorithm is convergent , and it achieves lower mse than that of the maximum likelihood ( ml ) method . each iteration consists of steps . denote the entry of by , and the updated estimate of denoted by satisfies . this indicates that strictly increases after each step as well as after one round of iteration . it is known that is continuous with respect to and . thus , it is concluded that the iterative algorithm is convergent . from , the map estimate of with a given is whose mse is calculated as the ml estimate of gives whose mse is calculated as clearly , always holds , and it can also be verified that is satisfied similarly . _ remark _ : according to and , we see that and . this is because the proposed map estimation algorithm is biased since . table i ( the proposed channel estimation algorithm ) : * initialize in accordance with ; set i_index ( ) . * repeat : for each , update by substituting into ; update by substituting s into ; i_index . * until i_index is satisfied .
* calculate the optimal according to . * restore s according to by using s and . * return s and . numerical results are provided to evaluate the performance of the ce - bem based map ( c - map ) channel estimation algorithm . the al channel and bl channel are generated from the spatial channel model ( scm ) in _ 3gpp tr 25.996 _ . the parameters are set as , and . we assume binary - phase - shift - keying ( bpsk ) modulation for , while is selected as the column of the discrete fourier transform ( dft ) matrix and is selected as the column of the dft matrix . the transmit powers and are set to be equal , and the noise variance is set to unit value ; thus the snr is equal to . in fig . [ fig2 ] , the average mses of both the al and bl channels for c - map estimation are compared with those for ml estimation . it is observed that the proposed c - map estimation outperforms the traditional ml method , achieving lower mses for both the al and bl channels . moreover , it is seen that the mse of the al channel for the ml method is not convergent . this is because random generation of can result in a singularity , e.g. , , leading to for the ml method , while the proposed c - map algorithm is robust to the singularity . in fig . [ fig3 ] , we evaluate the aesnr performance of the optimal weighted approach ( owa ) to channel restoration . it is seen that owa obtains higher aesnrs than the baseline ( restoring according to ce - bem ) , especially in the low snr region , and it approaches the baseline as the snr increases . a superimposed - segment training design has been proposed to decrease the training overhead and enhance the channel estimation accuracy in uplink c - rans .
based on the training design , a ce - bem based map channel estimation algorithm has been developed , where the bem coefficients of the time - varying radio al and the channel fading of the quasi - static wireless bl are first obtained , and then the time - domain channel samples of the al are restored by maximizing the aesnr . simulation results have demonstrated that the proposed algorithm reduces the estimation mse and increases the aesnr . s. zhang , f. gao , c. pei and x. he , segment training based individual channel estimation in one - way relay network with power allocation , _ ieee trans . wireless commun ._ , no . 3 , pp . 1300 - 1309 , mar . 2013 . g. dou , c. he , c. li and j. gao , a weighted first - order statistical method for time - varying channel and dc - offset estimation using superimposed training , _ ieee commun ._ , no . 5 , pp . 852 - 855 , may 2013 . | to decrease the training overhead and improve the channel estimation accuracy under time - varying environments in uplink cloud radio access networks ( c - rans ) , a superimposed - segment training design is proposed whose core idea is that each mobile station puts a periodic training sequence on top of the data signal , and remote radio heads ( rrhs ) insert a separate pilot prior to the received signal before forwarding to the centralized base band unit ( bbu ) pool . moreover , a complex - exponential basis - expansion - model ( ce - bem ) based maximum a posteriori probability ( map ) channel estimation algorithm is developed , where the bem coefficients of access links ( als ) and the channel fading of wireless backhaul links ( bls ) are first obtained , after which the time - domain channel samples of als are restored by maximizing the average effective signal - to - noise ratio ( aesnr ) . simulation results show that the proposed channel estimation algorithm can effectively decrease the estimation mean square error and increase the aesnr in c - rans , thus significantly outperforming the existing solutions .
channel estimation , cloud radio access networks , time - varying environment . |
regular and noninvasive measurement of vital signs such as pulse rate ( pr ) , breathing rate ( br ) , pulse rate variability ( prv ) , blood oxygen level ( spo2 ) and blood pressure ( bp ) is important both in - hospital and at - home due to their fundamental role in the diagnosis of health conditions and the monitoring of well - being . currently , the gold standard techniques to measure vital signs are based on contact sensors such as ecg probes , chest straps , pulse oximeters and blood pressure cuffs . however , contact - based sensors are not convenient in all scenarios , e.g. , contact sensors are known to cause skin damage in premature babies during their treatment in a neonatal intensive care unit ( nicu ) . non - contact methods for vital sign monitoring using a camera have recently been shown to be feasible . being non - contact , camera - based vital sign monitoring has many applications , from monitoring newborn babies in the nicu to in - situ continuous monitoring in everyday scenarios like working in front of a computer . however , camera - based vital sign monitoring does not perform well for subjects having darker skin tones and/or under low lighting conditions , as was highlighted in . furthermore , currently known algorithms require a person to be nearly at rest and facing the camera to ensure reliable measurements . in this paper , we address the challenge of reliable vital sign estimation for people having darker skin tones , under low lighting conditions and under different natural motion scenarios , to expand the scope of camera - based vital sign monitoring . photoplethysmography ( ppg ) is an optical method to measure cardiac - synchronous blood volume changes in body extremities such as the face , finger and earlobe .
as the heart pumps blood , the volume of blood in the arteries and capillaries changes by a small amount in sync with the cardiac cycle . the change in blood volume in the arteries and capillaries underneath the skin leads to a small change in skin color . the goal of a camera - based vital sign monitoring system is to estimate the ppg waveform , which is proportional to these small changes in skin color . vital signs such as pr , prv , spo2 and br can be derived from a well - acquired ppg waveform . the two major challenges in estimating ppg using a camera are : ( i ) the extremely low signal strength of the color - change signal , particularly for darker skin tones and/or under low lighting conditions , and ( ii ) motion artifacts due to an individual s movement in front of the camera . our main contribution in this paper is a new algorithm , labeled _ distanceppg _ , that improves the signal strength of the camera - based ppg signal estimate , with the following three key contributions . * a new method to improve the snr of the camera - based ppg signal by combining the color - change signals obtained from different regions of the face using a weighted average . * a new automatic method for determining the weights based only on the video recording of the subject . the weights capture the effect of incident light intensity and blood perfusion underneath the skin on the strength of the color - change signal obtained from a region . * a method to track different regions of the face separately as the person moves in front of the camera , using a combination of a deformable face tracker and a klt ( kanade - lucas - tomasi ) feature tracker , to extract the ppg waveform under motion . for different skin tones ( pale white to brown ) , the distanceppg algorithm improves the signal - to - noise ratio ( snr ) of the estimated ppg signal on average by db compared to prior methods . particularly , the improvement in snr for non - white skin tones is db .
we have evaluated ppg estimation under three natural motion scenarios : ( i ) reading content on a screen , ( ii ) watching video , and ( iii ) talking . distanceppg improves the snr of the estimated camera ppg in these scenarios by db on average . further , it improves the snr of camera - based ppg by as much as db under low lighting conditions when compared to prior methods . the improvement in snr of the camera - based ppg signal estimated using distanceppg reduces the error in pulse rate estimates in various scenarios . using distanceppg , the mean bias ( average difference between ground truth pulse oximeter derived pr and camera - based ppg derived pr ) is beats per minute ( bpm ) , with limits of agreement ( mean bias standard deviation of the difference ) between and bpm , for subjects having skin tones ranging from fair white to brown / black . when using prior methods , the corresponding performance numbers are , with limits of agreement between and . under the three motion scenarios of reading , watching , and talking for subjects of varying skin tones , the mean deviation is bpm with limits of agreement between and bpm using distanceppg . using prior methods , the corresponding performance numbers are bpm with limits of agreement between and bpm . further , distanceppg reduces the root mean square error ( rmse ) in pulse rate variability estimation below ms for non - black skin tones using a fps camera .
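the pulse rate comparisons above rest on locating the dominant spectral peak of the ppg estimate . a minimal sketch of that step follows ; it is not the paper s code , and the 30 fps frame rate , 10 s window and 40 - 240 bpm search band are assumptions on a synthetic trace .

```python
import numpy as np

fs = 30.0                        # assumed camera frame rate (fps)
t = np.arange(0, 10, 1 / fs)     # 10 s estimation window
pr_true = 72.0                   # synthetic ground-truth pulse rate in bpm

rng = np.random.default_rng(2)
ppg = np.sin(2 * np.pi * (pr_true / 60) * t) + 0.5 * rng.standard_normal(t.size)

# restrict the spectrum to a plausible pulse band and take the dominant peak
spec = np.abs(np.fft.rfft(ppg)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = (freqs >= 40 / 60) & (freqs <= 240 / 60)   # 40-240 bpm in hz
pr_est = 60 * freqs[band][np.argmax(spec[band])]
print(round(pr_est))  # -> 72
```

the 10 s window gives a frequency resolution of 0.1 hz , i.e. 6 bpm , which is why longer windows or peak interpolation are needed when finer pr accuracy is required .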
over the past decade , several researchers have worked on measuring vital signs such as pr , prv , br , and spo2 using a camera . initially , external arrays of leds at nm and nm were used to illuminate a region of tissue for measuring pr using a monochrome cmos camera . it was later shown that pr and br can also be determined using simply a color camera and ambient illumination . the authors in found that the face is the best region to extract the ppg signal because of better signal strength . they also reported that the _ green channel _ of the rgb camera performs better than the red and blue channels for detecting pr and br . the fact that the green channel performs better is expected , as the absorption spectra of hemoglobin ( hb ) and oxyhemoglobin ( hbo ) , the two main constituent chromophores in blood , peak in the region around nm , which is essentially the passband range of the green filters in color cameras . further , in , the authors used a color webcam under ambient illumination to simultaneously detect the pr , prv and br of multiple people in a video by using automatic face detection to define the face region . they used blind source separation ( bss ) to decompose the three camera channels ( red , green , blue ) into three independent source signals using the independent component analysis ( ica ) algorithm , and extracted the ppg signal from one of the independent sources . more recently , the authors in demonstrated that the cyan , orange and green ( cog ) channels of a camera work better than the red , green , and blue ( rgb ) channels for camera - based vital sign estimation . one possible explanation for the better performance of the cog channels could be the higher overlap between the passbands of cyan ( nm ) , green ( nm ) and orange ( nm ) and the peak in the absorption spectra of hb and hbo ( nm ) . most of the past work , however , did not report how camera - based ppg performs on individuals with different skin tones , under low lighting conditions , and for
different motion scenarios . it is well known that the higher amount of _ melanin _ present in darker skin tones absorbs a significant amount of incident light , and thus degrades the quality of the camera - based ppg signal , making the system ineffective for extracting vital signs in darker skin tones . recently , a pilot study conducted in the nicu for monitoring pulse rate using a camera - based method showed difficulty under low lighting conditions and/or under infant motion . to counter the motion challenge , the authors in used automatic face detection in consecutive frames to track the face . but they reported difficulties in continuously detecting faces under motion due to a large number of false negatives . another method is to compute 2d shifts in face location between consecutive frames using image correlation to model the motion . by simply computing the global 2d shifts , one can only capture the basic translational motion of the face , and it is difficult to compensate for more natural motion like turning or tilting of the face , smiling or talking , generally found in in - situ scenarios . for camera - based ppg estimation , we record the video of a person facing a camera , and the objective is to develop an algorithm to estimate the underlying ppg signal using the recorded video . the recorded video is in the form of an intensity signal comprising a sequence of frames . each frame of the video records the intensity level of the light reflected back from the face over a two - dimensional grid of pixels in the camera sensor . if the camera sensor has multiple color channels ( e.g. , red , green , blue ) , one can get separate intensity signals corresponding to each channel ( e.g. ) . in general , the measured intensity of any reflected light can be decomposed into two components : ( i ) the intensity of illumination , and ( ii ) the reflectance of the surface ( skin ) , i.e. , the illumination intensity corresponds to the intensity of ambient or any dedicated light falling on the face .
for ppg estimation , it is generally assumed that the light intensity remains the same over the ppg estimation window ( typically sec in past works ) . the skin reflectance is equal to the fraction of light reflected back from the skin and consists of two levels of light reflectance : ( i ) surface reflection , and ( ii ) subsurface reflection or backscattering . a large part of the light incident on the face gets reflected back from the surface of the skin , and is characterized by the skin s bidirectional reflectance distribution function ( brdf ) . the remaining part of the incident light goes underneath the skin surface and is absorbed by the tissue and the chromophores ( hb , hbo ) present in the blood inside arteries and capillaries . the volume of blood in the arteries and capillaries changes with each cardiac cycle , and thus the level of light absorption changes as well . since the ppg signal , by definition , is proportional to this cardio - synchronous pulsatile blood volume change in the tissue , one can estimate the ppg signal by estimating these small changes in subsurface light absorption . thus , the camera - based ppg signal is estimated by extracting small variations in the subsurface component of skin reflectance . since the incident light intensity is assumed to be constant over the ppg estimation time window , any temporal change in the intensity of the light reflected back from the face region will be proportional to the changes in the reflectance of the skin surface . generally , these temporal changes in recorded intensity will be dominated by changes in the surface reflection component , unrelated to the underlying ppg signal of interest .
as a first step for camera - based ppg estimation , one can spatially average the recorded intensity level over all the pixels in the face region to yield one measurement point per frame of the video . the basic idea is that by averaging the intensity signal , the incoherent changes in the surface reflection component over all the pixels inside the face will cancel out , while the coherent changes in the subsurface reflection component due to blood volume changes will add up to give an estimate . the spatially averaged intensity signal would be proportional to the changes in the subsurface reflectance component , and thus to the underlying ppg signal of interest . one generally filters the signal between hz and hz ( the frequency band of interest ) to extract the ppg signal . * challenge 1 : very low signal strength * : the ppg signal extracted from camera video has low signal strength . this is because the skin vascular bed contains a very small amount of blood ( only of the total blood in the body ) , and the blood volume itself experiences only a small ( ) change in sync with the cardiovascular pulse . thus , the change in subsurface skin reflectance due to cardio - synchronous changes in blood volume is very small . this small change in subsurface reflectance results in a very small change in the light intensity recorded using a camera placed at a distance . on the other hand , the change in the surface reflection component due to small movements of the person is very large . for example , see figure [ fig : roi ] , where the top plot ( red ) shows recorded intensity changes in a single pixel marked on the forehead ( ) .
when compared to the ground truth pulse oximeter signal ( bottom , ) , it is clear that the raw intensity variations in are unrelated to the underlying ppg signal and are dominated by a significant amount of surface reflectance changes . to estimate small changes in subsurface reflectance , a general idea used in most past work is to spatially average the recorded intensity level over the face region . for example , see the plot of in figure [ fig : roi ] . clearly , is a better estimate of the ppg signal , as is evident by comparing it with the ground truth pulse oximeter signal ( bottom ) . the amplitude of is within on the camera intensity scale ( all these signals are filtered between ] to reject the out - of - band component of skin surface reflection ( ) and other noise outside the band of interest to obtain . based on the camera - based ppg signal acquisition model in section [ sec :ppg_acquisition_model ] , the filtered signals from different regions of the face can be written as where represents the corresponding roi number in . here , denotes the strength of the underlying ppg signal in region , and is determined both by the strength of modulation and the incident light illumination . further , denotes the noise component due to camera quantization , unfiltered surface reflection and motion artifacts .
here , can be considered as different channels that receive different strengths of the same desired signal , and have different levels of noise . we can combine all these different channels using a weighted average . the weights for each channel can be determined based on the idea of maximum ratio diversity . the maximum ratio diversity algorithm states that the assigned weights should be proportional to the root - mean - squared ( rms ) value of the signal component , and inversely proportional to the mean - squared noise in that channel , in order to maximize the signal - to - noise ratio of the overall camera - based ppg estimate ; mathematically , as both and are unknown in our case , we need to develop an alternative method to estimate these weights . for ease of reference , we label as the _ goodness metric _ for region from here on . the maximum ratio diversity algorithm assumes that the signal component is locally coherent among all channels , i.e. , there is no time delay between ppg signals extracted from different rois , and that the noise component is uncorrelated . generally speaking , ppg signals obtained from different regions of the skin would exhibit varying delays , as the blood reaches these regions at different times . for example , there is a time lag of ms in ppg recorded between the finger and toe . however , the time lag between ppg signals obtained from close - by regions is small , e.g. , all the regions inside the face . our own measurements show that the delay is less than ms between the farthest points on the face . as this delay falls within one sample period of normal cameras having a frame rate of hz , we can neglect such small delays for all practical purposes , and thus the signal component can be considered locally coherent .
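the maximum - ratio weighting can be sketched with synthetic region signals . this is illustrative , not the paper s code : the per - region signal strengths , noise levels , 30 fps rate and 72 bpm pulse are assumptions , and the weights below use the known strengths directly rather than a goodness metric estimated from the video .

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 30.0
t = np.arange(0, 10, 1 / fs)
s = np.sin(2 * np.pi * 1.2 * t)            # shared pulsatile component (72 bpm)

# synthetic rois: forehead-like (strong), cheek-like (weaker), noisy patch
a   = np.array([1.0, 0.5, 0.1])            # per-region signal strengths
sig = np.array([0.5, 0.5, 2.0])            # per-region noise std
y = a[:, None] * s + sig[:, None] * rng.standard_normal((3, t.size))

w = a / sig**2                             # maximal-ratio weights: strength / noise power
combined = (w[:, None] * y).sum(axis=0) / w.sum()
average  = y.mean(axis=0)                  # naive face averaging

def snr_db(x):
    # power at the pulse-frequency bin vs mean power of the other bins
    spec = np.abs(np.fft.rfft(x)) ** 2
    k = int(round(1.2 * t.size / fs))
    noise = (spec.sum() - spec[k]) / (spec.size - 1)
    return 10 * np.log10(spec[k] / noise)

print(snr_db(combined) > snr_db(average))  # weighting beats naive averaging
```

down - weighting the noisy patch is exactly what rescues the estimate ; a plain average lets one bad region drag down the whole face .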
as the amplitude of the camera - based ppg signal of interest is generally very small ( within 1 - 2 bits of the camera adc ) , we also reject regions which have unusually large signals . large variations are mostly due to illumination changes or motion artifacts . thus , we reject all regions having amplitude greater than a threshold . our final estimate of the ppg signal over a time window of sec is given by where is the indicator function , } \hat{y}_i(t) ] is the maximum and minimum amplitude of the over a sec duration . the ppg signal has a fundamental frequency of oscillation equal to the pulse rate . thus , the spectral power of the ppg signal is concentrated in a small frequency band around the pr . moreover , the spectral power of the noise present in will be distributed over the passband of the filter ] . then the goodness metric can be defined as where ] is the passband of the bandpass filter ( ] . we then reject those rois where the signal amplitude crosses the amplitude threshold . camera - based ppg signals are really weak , and hence they hardly cross value in the units of -bit camera pixel intensity .
any large change in intensity is mostly due to a change in illumination or a large motion artifact , as discussed earlier . we then compute a coarse estimate of the pulse rate by combining the ppg signals from all the remaining rois . this coarse pulse rate estimate can be erroneous at times , and so we keep track of the history of pulse rates over the last epochs . if the current estimate of is off by more than , then we replace the current estimate with the median of the last four estimates . we then compute the goodness metric for all the remaining rois ( all regions rejected prior to this stage are given a weight for the current epoch ) and combine the ppg signals using the weighted average ( equation ) . we recompute the goodness metric for each roi after every epoch . for comparison , we implemented known past methods for camera - based ppg estimation . the general steps taken are : ( i ) select the face region using the viola - jones face detector as described in , ( ii ) extract the ppg signal by first computing the spatial average of the pixel intensity within the selected face region , and then filtering ( detrending ) the estimate , and ( iii ) to compensate for motion , compute the 2d shift in the face between consecutive frames as described in and extract the ppg from the tracked face region . in this sense , we have used a combination of known methods as the single - channel ( green ) camera - based ppg estimation algorithm for comparison with our distanceppg algorithm . we label this combination the _ face averaging method _ from now on . another set of work in camera - based ppg estimation involves decomposing different camera channels ( e.g.
red , green , blue ) into independent source signals using independent component analysis ( ica ) , and extracting the desired ppg signal from one of these independent sources . our proposed distanceppg algorithm improves the camera - based ppg estimate by spatially combining ppg signals from different regions of the face and by improving upon the tracking algorithm , and it uses only a single channel ( green ) of the camera . on the other hand , ica - based methods utilize multiple camera channels ( e.g. , red , green , and blue or cyan , orange , and green ) to improve the performance of the camera - based ppg estimate by separating independent sources . thus , these two methods are characteristically different , and we will summarize the performance improvement provided by the ica - based method and by distanceppg . for all single - channel video recordings in this study , we used a flea3 usb 3.0 fl - u3 - 13e4m - c monochrome camera operated at frames per second , with a resolution of , and bits per pixel . we added a nm full - width half - max ( fwhm ) green filter ( fb550 - 40 from thorlabs ) in front of the monochrome camera . we selected a green filter since the absorption spectra of hb and hbo2 peak in this wavelength region . moreover , commercial color cameras have the highest number of pixels with green filters ( bayer pattern ) . for color video recording , we used a flea3 usb 3.0 fl - u3 - 13e4c - c color camera operated at frames per second with an rgb bayer pattern , a total resolution of , and bits per pixel . we used a texas instruments afe4490spo2evm pulse oximeter module to record the contact - based ppg signal for comparison . it operates at a sampling rate of hz . the distance between the camera and the subject was m . both the camera system and the pulse oximeter were started simultaneously , and the data were recorded on a pc workstation . all processing is done using custom software written in matlab .
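the per - frame processing described earlier , spatially averaging a face roi and bandpassing the result , can be sketched as follows . this is a toy stand - in for real video ( the 30 fps rate , 0.5 - 5 hz band , roi size and noise levels are assumptions ) showing that a pulsatile component of about one intensity unit , invisible in any single pixel , emerges after averaging :

```python
import numpy as np

fs = 30.0                                  # assumed camera frame rate (fps)
rng = np.random.default_rng(4)

# stand-in for a video: frames of a face roi whose mean green level carries
# a tiny pulsatile component plus per-pixel sensor noise
n_frames, h, w = 300, 40, 40
t = np.arange(n_frames) / fs
pulse = np.sin(2 * np.pi * 1.2 * t)        # 72 bpm, ~1-unit amplitude
frames = 120.0 + pulse[:, None, None] + 5.0 * rng.standard_normal((n_frames, h, w))

# step 1: spatial average over the roi -> one sample per frame
raw = frames.mean(axis=(1, 2))

# step 2: zero out spectral content outside the pulse band (0.5-5 hz assumed)
spec = np.fft.rfft(raw - raw.mean())
freqs = np.fft.rfftfreq(n_frames, 1 / fs)
spec[(freqs < 0.5) | (freqs > 5.0)] = 0
ppg = np.fft.irfft(spec, n_frames)

# the recovered trace tracks the hidden pulsatile component closely
corr = np.corrcoef(ppg, pulse)[0, 1]
print(corr > 0.9)
```

averaging over the 1600 - pixel roi cuts the per - frame noise standard deviation by a factor of 40 , which is why the incoherent pixel noise cancels while the shared pulse survives .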
all experiments (except the lighting experiment) were conducted under ambient (fluorescent) lighting at 500 lux brightness. the main goal of the experiments reported here is to characterize and quantify the performance of the two components of our proposed distanceppg algorithm, the mrc algorithm and the region-based motion tracking algorithm, and to compare them with the face averaging method. we evaluate the performance by varying three main parameters of interest: (i) skin tone, (ii) motion, and (iii) ambient light intensity, and quantify their effect on ppg estimation accuracy. we use the same raw video feed to evaluate our distanceppg algorithm in comparison to the face averaging method, so the performance improvements reported here are due to the proposed algorithm and are independent of any specific hardware or camera choices we made for the evaluation. all the experiments done in this research were approved by the rice university institutional review board (protocol number: , approval date: ). for the first experiment, we collected single-channel (green) video data from subjects ( male, female) with different skin tones (from light, pale white to dark brown/black). for this experiment, subjects were asked to face the camera and remain static for a duration of seconds (involuntary motions were not restricted).
for the second set of experiments, we collected (single green channel) video under three natural motion scenarios: (i) reading on a computer, (ii) watching a video, (iii) talking. these motion scenarios are representative of a general class of motion exhibited by users of tablets, phones, or laptops while facing the screen. the reading scenario involves lateral movement of the head while reading text on screen. the video-watching scenario also involves intermittent facial expressions such as smiling, amazement, or sadness, apart from lateral movement of the head. the talking scenario involves considerable non-rigid movement around the jaw and cheeks, and is thus ideal for evaluating system performance in harsh conditions. for each of the motion scenarios, we collected seconds of video for subjects with different skin tones, along with stationary video recordings for baseline comparison. for the third set of experiments, we varied the illumination level from lux up to lux (ambient light is around lux) and recorded single-channel (green) video for a duration of seconds under each lighting condition for two subjects having pale-white and brown skin tones.
for comparison with the ica-based method, we collected two sets of data: (i) a static dataset comprising color video (red, green, blue) of subjects of varying skin tones (only non-caucasian) for a duration of seconds at lux illumination, and (ii) a talking dataset comprising color video of subjects of varying skin tones (non-caucasian), with subjects under lux illumination and subject under lux illumination. this set is deliberately chosen to be extremely harsh (lower light, non-caucasian skin tones, and large motion during talking) to highlight scenarios where currently known algorithms (including distanceppg) fail, and to provide a dataset on which future algorithms can improve. the dataset will be released publicly, and other researchers can access it at http://www.ece.rice.edu/~mk28/distanceppg/dataset/. as a waveform estimation algorithm, distanceppg provides an estimate of the ground truth ppg waveform using video of a person's face. thus, we use the signal-to-noise ratio (snr) of the estimate to quantify performance for comparison. the ground truth signal, , is recorded using a pulse oximeter attached to the subject's ear. we chose the earlobe instead of a finger probe because of its proximity to the face region. further, we also evaluate the performance based on the accuracy of physiological parameters like pulse rate and pulse rate variability (i.e.
beat-to-beat changes in the pulse interval) that can be extracted from the ppg waveform. we report the mean error rates in estimating pr and prv using our distanceppg algorithm and using the face averaging method under different experimental scenarios. to define the snr of the estimated ppg waveform, we used the standard ppg signal obtained from a pulse oximeter connected to a person's earlobe as our ground truth signal. the ppg waveform obtained from a contact pulse oximeter also exhibits errors due to motion and ambient light artifacts, and in that sense our choice of ground truth is a best-effort choice. nonetheless, the noise present in a ppg waveform acquired using a contact pulse oximeter is orders of magnitude smaller than that obtained using a camera-based system, so our estimate of the snr is still reasonably accurate. the amplitude of the ppg signal recorded by the pulse oximeter is unrelated to the amplitude of the camera-based ppg signal, as the two systems have completely independent sensor architectures and analog gain stages. so, here we have developed a definition of snr that is independent of the exact amplitudes of these waveforms. our snr metric captures how similar the camera-based ppg signal is to the pulse-oximeter-based ground truth ppg signal. let denote the ppg acquired using the pulse oximeter. let denote the ppg estimated from the camera-based system (from the previous section).
here, is the noise in the ppg signal acquired from the camera. apart from quantization noise, also includes uncompensated motion artifacts. the noise present in the pulse oximeter system is denoted as . let us assume that all signals are defined in the time interval ] hz), a method generally used for weak periodic signal detection, would not give a good estimate of the ppg waveform, as the spectral band of interest for the ppg signal is much wider ( ]). the small changes in skin subsurface reflectance, which encode the ppg signal, are not recoverable in the presence of large in-band surface reflectance changes. thus, the general approach we adopted in distanceppg is to reject regions undergoing large surface reflectance changes. consequently, during large motion we end up rejecting a majority of the face region, and our estimate of the ppg signal becomes inaccurate. two popular apps for measuring non-contact pulse rate using the color change (or ppg) signal from a person's face are the philips vital signs camera and cardiio. both of these apps require users to be at rest, facing the camera in a well-lit environment, to be effective. the distanceppg algorithm discussed in this paper addresses these challenges and would thus extend the use cases of mobile phone and computer apps for vital sign monitoring. we are in the process of developing a real-time pc-based application to robustly estimate the ppg signal using a webcam. our future work includes porting our code to popular mobile platforms (android/ios), and further improving the performance of distanceppg under motion scenarios. this work was partially supported by nsf cns 1126478, nsf iis-1116718, a rice university graduate fellowship, a texas instruments fellowship, and texas higher education coordinating board: thecb-nharp 13308 | vital signs such as pulse rate and breathing rate are currently measured using contact probes.
but non-contact methods for measuring vital signs are desirable both in hospital settings (e.g. in the nicu) and for ubiquitous in-situ health tracking (e.g. on mobile phones and computers with webcams). recently, camera-based non-contact vital sign monitoring has been shown to be feasible. however, camera-based vital sign monitoring is challenging for people with darker skin tones, under low lighting conditions, and/or during movement of an individual in front of the camera. in this paper, we propose distanceppg, a new camera-based vital sign estimation algorithm which addresses these challenges. distanceppg proposes a new method of combining skin-color change signals from different tracked regions of the face using a weighted average, where the weights depend on the blood perfusion and incident light intensity in the region, to improve the signal-to-noise ratio (snr) of the camera-based estimate. one of our key contributions is a new automatic method for determining the weights based only on the video recording of the subject. the gains in snr of the camera-based ppg estimated using distanceppg translate into a reduction of the error in vital sign estimation, and thus expand the scope of camera-based vital sign monitoring to potentially challenging scenarios. further, a dataset will be released, comprising synchronized video recordings of faces and pulse-oximeter-based ground truth recordings from the earlobe for people with different skin tones, under different lighting conditions and for various motion scenarios. w. verkruysse , l. o. svaasand , and j. s. nelson , `` remote plethysmographic imaging using ambient light , '' optics express * 16 * , 2143421445 ( 2008 ) . m .- z . poh , d. mcduff , and r. picard , `` advancements in noncontact , multiparameter physiological measurements using a webcam , '' ieee transactions on biomedical engineering * 58 * , 711 ( 2011 ) . y. sun , s. hu , v. azorin - peris , s.
greenwald , j. chambers , and y. zhu , `` motion - compensated noncontact imaging photoplethysmography to monitor cardiorespiratory status during exercise , '' journal of biomedical optics * 16 * , 077010 ( 2011 ) . l. a. m. aarts , v. jeanne , j. p. cleary , c. lieber , j. s. nelson , s. bambang oetomo , and w. verkruysse , `` non - contact heart rate monitoring utilizing camera photoplethysmography in the neonatal intensive care unit - a pilot study , '' early human development * 89 * , 943948 ( 2013 ) . j. m. saragih , s. lucey , and j. f. cohn , `` deformable model fitting by regularized landmark mean - shift , '' international journal of computer vision * 91 * , 200215 ( 2011 ) . b. d. lucas and t. kanade , `` an iterative image registration technique with an application to stereo vision , '' ( 1981 ) , pp . 674679 . c. tomasi and t. kanade , `` detection and tracking of point features , '' technical report mu - cs-91 - 132 , carnegie mellon university ( 1991 ) . f. p. wieringa , f. mastik , and a. f. w. van der steen , `` contactless multiple wavelength photoplethysmographic imaging : a first step toward `` spo2 camera '' technology , '' annals of biomedical engineering * 33 * , 10341041 ( 2005 ) . k. humphreys , t. ward , and c. markham , `` noncontact simultaneous dual wavelength photoplethysmography : a further step toward noncontact pulse oximetry , '' the review of scientific instruments * 78 * , 044304 ( 2007 ) . m .- z . poh , d. j. mcduff , and r. w. picard , `` non - contact , automated cardiac pulse measurements using video imaging and blind source separation , '' optics express * 18 * , 1076210774 ( 2010 ) . b. d. holton , k. mannapperuma , p. j. lesniewski , and j. c. thomas , `` signal recovery in imaging photoplethysmography , '' physiological measurement * 34 * , 14991511 ( 2013 ) . d. mcduff , s. gontarek , and r. 
picard , `` improvements in remote cardiopulmonary measurement using a five band digital camera , '' ieee transactions on biomedical engineering * 61 * , 25932601 ( 2014 ) . j. allen , `` photoplethysmography and its application in clinical physiological measurement , '' physiological measurement * 28 * , r1 ( 2007 ) . s. hu , v. azorin - peris , and j. zheng , `` opto - physiological modeling applied to photoplethysmographic cardiovascular assessment , '' journal of healthcare engineering * 4 * , 505528 ( 2013 ) . d. brennan , `` linear diversity combining techniques , '' proceedings of the ieee * 91 * , 331356 ( 2003 ) . m. nitzan , b. khanokh , and y. slovik , `` the difference in pulse transit time to the toe and finger measured by photoplethysmography , '' physiological measurement * 23 * , 8593 ( 2002 ) . j. shi and c. tomasi , `` good features to track , '' in `` , 1994 ieee computer society conference on computer vision and pattern recognition , 1994 . proceedings cvpr 94 , '' ( 1994 ) , pp . 593600 . z. kalal , k. mikolajczyk , and j. matas , `` forward - backward error : automatic detection of tracking failures , '' in `` 2010 20th international conference on pattern recognition ( icpr ) , '' ( 2010 ) , pp . 27562759 . m. a. fischler and r. c. bolles , `` random sample consensus : a paradigm for model fitting with applications to image analysis and automated cartography , '' commun . acm * 24 * , 381395 ( 1981 ) . m. fernandez , k. burns , b. calhoun , s. george , b. martin , and c. weaver , `` evaluation of a new pulse oximeter sensor , '' american journal of critical care : an official publication , american association of critical - care nurses * 16 * , 146152 ( 2007 ) . j. a. j. 
heathers , `` smartphone - enabled pulse rate variability : an alternative methodology for the collection of heart rate variability in psychophysiological research , '' international journal of psychophysiology : official journal of the international organization of psychophysiology * 89 * , 297304 ( 2013 ) . philips , `` philips vital signs camera , '' http://www.vitalsignscamera.com/. cardiio , `` cardiio , '' http://www.cardiio.com/. |
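the amplitude-independent snr metric described in the article above can be computed, for example, by projecting the camera signal onto the normalized ground-truth waveform; this projection-based definition is a plausible sketch, not necessarily the paper's exact formula:

```python
import numpy as np

def ppg_snr_db(cam, ref):
    """snr (db) of a camera ppg estimate `cam` against a pulse-oximeter
    ground truth `ref`, independent of the absolute amplitudes of either
    signal: the component of `cam` along the normalized `ref` is treated
    as signal, and the residual as noise."""
    cam = np.asarray(cam, float) - np.mean(cam)
    ref = np.asarray(ref, float) - np.mean(ref)
    ref = ref / np.linalg.norm(ref)       # unit-norm ground truth
    s = np.dot(cam, ref) * ref            # projection of cam onto ref
    n = cam - s                           # residual (motion artifacts, etc.)
    return 10.0 * np.log10(np.sum(s**2) / np.sum(n**2))
```

because both signals are mean-centered and the reference is normalized, rescaling either waveform by an arbitrary analog gain leaves the metric unchanged, matching the amplitude-independence requirement stated in the article.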
power systems are undergoing drastic changes on both the supply side and the demand side. on the supply side, an increasing amount of renewable energy (e.g. wind energy, solar energy) is penetrating the power systems. the adoption of renewable energy reduces the environmental damage caused by conventional energy generation, but also introduces high fluctuation and uncertainty in energy generation on the supply side. to cope with this uncertainty, the demand side is deploying various solutions, one of which is the use of energy storage. in this paper, we study the optimal demand side management (dsm) strategy in the presence of energy storage, and the corresponding optimal economic dispatch strategy. specifically, we consider a power system consisting of energy generators on the supply side, an independent system operator (iso) that operates the system, and multiple aggregators and their customers on the demand side. on the supply side, the iso receives energy purchase requests from the aggregators as well as reports of (parameterized) energy generation cost functions from the generators, and based on these, dispatches the energy generators and determines the unit energy prices. on the demand side, the aggregators are located in different geographical areas and provide energy for residential customers (e.g. households) or for commercial customers (e.g. an office building) in the neighborhood. in the literature, the term ``dsm'' has been broadly used for different decision problems on the demand side.
for example, some papers (see for representative papers) focus on the interaction between one aggregator and its customers, and refer to dsm as determining the power consumption schedules of the users. other papers focus on how multiple aggregators, or a single aggregator, purchase energy from the iso based on the energy consumption requests from their customers. our paper pertains to the second category of research works. the key feature that sets our paper apart from most existing works is that all the decision makers in the system are _foresighted_: each aggregator seeks to minimize its _long-term_ cost, consisting of its operational cost of energy storage and its payment for energy purchase. in contrast, in most existing works the aggregators are _myopic_ and seek to minimize their _short-term_ (e.g. one-day or even hourly) cost. in the presence of energy storage, foresighted dsm strategies can achieve much lower costs than myopic dsm strategies because the aggregators' current decisions affect their future costs. for example, an aggregator can purchase more energy from the iso than is requested by its customers, and store the unused energy for future use, if it anticipates that the future energy price will be high. hence, the aggregators' current purchases affect how much they will purchase in the future.
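the benefit of foresight described above can be illustrated with a toy dynamic program over the storage level (the prices, demands, and storage capacity below are made-up illustrative numbers, not values from the paper):

```python
import math

def myopic_cost(prices, demand):
    """buy exactly the requested energy in each period."""
    return sum(p * d for p, d in zip(prices, demand))

def foresighted_cost(prices, demand, cap):
    """minimum total purchase cost with a lossless storage of integer
    capacity `cap`, computed by backward dynamic programming over the
    storage level."""
    V = [0.0] * (cap + 1)                     # cost-to-go per storage level
    for t in reversed(range(len(prices))):
        newV = [math.inf] * (cap + 1)
        for s in range(cap + 1):
            for buy in range(demand[t] + cap + 1):
                s_next = s + buy - demand[t]  # storage balance equation
                if 0 <= s_next <= cap:
                    newV[s] = min(newV[s], prices[t] * buy + V[s_next])
        V = newV
    return V[0]
```

with prices [1, 3] and unit demand in both periods, a myopic aggregator pays 4, while a foresighted one with one unit of storage buys two units in the cheap period and pays only 2.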
in this case, it is optimal for the entities to make _foresighted_ decisions, taking into account the impact of their current decisions on the future. since the aggregators deploy foresighted dsm strategies, it is also optimal for the iso to make foresighted economic dispatch decisions, in order to minimize the _long-term_ total cost of the system, consisting of the long-term cost of energy generation and the aggregators' long-term operational cost. note that although some works assume that the aggregator is foresighted, they study the decision problem of a _single_ aggregator and do not consider the economic dispatch problem of the iso. when there are multiple aggregators in the system (which is the case in practice), this approach neglects the impact of the aggregators' decisions on each other, which leads to suboptimal solutions in terms of minimizing the total cost of the system. when the iso and _multiple_ aggregators make _foresighted_ decisions, it is difficult to obtain the optimal foresighted strategies for two reasons. first, the information is decentralized. the total cost depends on the generation cost functions (e.g. the speed of wind for wind energy generation, the amount of sunshine for solar energy generation, and so on), the status of the transmission lines (e.g.
the flow capacity of the transmission lines), the amount of electricity in the energy storage, and the demand from the customers, all of which may change due to supply and demand uncertainty. however, none of the entities knows all of the above information: the iso knows only the generation cost functions and the status of the transmission lines, and each aggregator knows only the status of its own energy storage and the demand from its own customers. hence, the dsm strategy needs to be decentralized, such that each entity can make decisions solely based on its locally available information. second, the aggregators are coupled in a complicated way that is unknown to them. specifically, each aggregator's purchase affects the prices, and thus the payments, of the other aggregators. however, the price is determined by the iso based on the generation cost functions and the status of the transmission lines, neither of which is known to any aggregator. hence, each aggregator does not know how its purchase will influence the price, which makes it difficult for the aggregator to make the optimal decision. to overcome the difficulty resulting from information decentralization and complicated coupling, we propose a decentralized dsm strategy based on conjectured prices. specifically, each aggregator makes decisions based on its conjectured price and its local information on the status of its energy storage and the demand from its customers.
in other words, each aggregator summarizes all the unavailable information into its conjectured price. note, however, that the price is determined based on the generation cost functions and the status of the transmission lines, which are known only to the iso. hence, the aggregators' conjectured prices are determined by the iso. we propose a simple online algorithm for the iso to update the conjectured prices based on its local information, and prove that by using this algorithm, the iso obtains the optimal conjectured prices under which the aggregators' (foresighted) best responses minimize the total cost of the system. [table: comparisons with related works on demand-side management] in this paper, we proposed a methodology to compute optimal foresighted dsm strategies that minimize the long-term total cost of the power system. we overcame the hurdles of information decentralization and complicated coupling in the system by decoupling the entities' decision problems using conjectured prices. we proposed an online algorithm for the iso to update the conjectured prices, such that they converge to the optimal ones, based on which the entities make optimal decisions that minimize the long-term total cost. we proved that the proposed method achieves the social optimum, and demonstrated through simulations that the proposed foresighted dsm significantly reduces the total cost compared to the optimal myopic dsm (up to 60% reduction) and to the foresighted dsm based on the lyapunov optimization framework (up to 30% reduction). due to limited space, we give only a proof sketch. the proof consists of three key steps. first, we prove that by penalizing the constraints into the objective function, the decision problems of the different entities can be decentralized. hence, we can derive optimal decentralized strategies for the different entities under given lagrangian multipliers. then we prove
that the update of the lagrangian multipliers converges to the optimal ones, under which there is no duality gap between the primal problem and the dual problem, due to the convexity assumptions made on the cost functions. finally, we validate the calculation of the conjectured prices. first, suppose that there is a central controller that knows everything about the system. then the optimal strategy for the design problem should result in a value function that satisfies the following bellman equation: for all , we have . defining a lagrangian multiplier associated with the constraints, and penalizing the constraints in the objective function, we get the following bellman equation, where $c^{\bm{\lambda}}$ denotes the stage cost penalized by the multipliers:

$$v^{\bm{\lambda}}(s_0,\bm{s}) = \min_{a_0,\bm{a}} \left\{ c^{\bm{\lambda}}(s_0,\bm{s},a_0,\bm{a}) + \delta \cdot \sum_{s_0^\prime,\bm{s}^\prime} \rho(s_0^\prime,\bm{s}^\prime|s_0,\bm{s},a_0,\bm{a}) \, v^{\bm{\lambda}}(s_0^\prime,\bm{s}^\prime) \right\}.$$

in the following lemma, we prove that can be decomposed. [lemma : decompositionvaluefunction] the optimal value function that solves can be decomposed as for all , where can be computed by entity locally by solving, with $c_i^{\bm{\lambda}}$ its local penalized stage cost,

$$v_i^{\bm{\lambda}}(s_i) = \min_{a_i} \left\{ c_i^{\bm{\lambda}}(s_i,a_i) + \delta \cdot \sum_{s_i^\prime} \rho_i(s_i^\prime|s_i,a_i) \, v_i^{\bm{\lambda}}(s_i^\prime) \right\}.$$

this can be proved by the independence of the different entities' states and by the decomposition of the constraints. specifically, in a dc power flow model, the constraints are linear with respect to the actions. as a result, we can decompose the constraints as . we have proved that by penalizing the constraints using lagrangian multipliers, the different entities can compute the optimal value function distributively. due to the convexity assumptions on the cost functions, we can show that the primal problem is convex. hence, there is no duality gap.
in other words, at the optimal lagrangian multipliers, the corresponding value function is equal to the optimal value function of the primal problem. it remains to show that the update of the lagrangian multipliers converges to the optimal ones. it is a well-known result in dynamic programming that is convex and piecewise linear in , and that the subgradient is . note that we use the sample means of and , whose expectations are the true mean values of and . since is linear in and , the subgradient calculated based on the sample means has the same mean value as the subgradient calculated based on the true mean values. in other words, the update is a stochastic subgradient descent method. it is well known that when the stepsize , stochastic subgradient descent converges to the optimal . h. mohsenian - rad , v. w. s. wong , j. jatskevich , r. schober , and a. leon - garcia , `` autonomous demand - side management based on game - theoretic energy consumption scheduling for the future smart grid , '' _ ieee trans . smart grid _ , vol . 1 , no320331 , 2011 . kim , s. ren , m. van der schaar , and j .- w . lee , `` bidirectional energy trading and residential load scheduling with electric vehicles in the smart grid , '' _ ieee j. sel . areas commun . , special issue on smart grid communications series _ , vol . 31 , no . 7 , pp . 12191234 , jul . 2013 . italo atzeni , luis g. ordóñez , gesualdo scutari , daniel p. palomar , and javier r. fonollosa , `` noncooperative and cooperative optimization of distributed energy generation and storage in the demand - side of the smart grid , '' _ ieee trans . on signal process . _ 24542472 , may 2013 . italo atzeni , luis g. ordóñez , gesualdo scutari , daniel p. palomar , and javier r. fonollosa , `` demand - side management via distributed energy generation and storage optimization , '' _ ieee trans . on smart grids _ , 2 , pp . 866876 , june 2013 . e. altman , k. avrachenkov , n. bonneau , m. debbah , r. el - azouzi , d. s.
menasche , `` constrained cost - coupled stochastic games with independent state processes , '' _ technical report _ . available : http://www-sop.inria.fr/members/konstantin.avratchenkov/pubs/constrgame.pdf j. hörner , t. sugaya , s. takahashi , and n. vieille , `` recursive methods in discounted stochastic games : an algorithm for and a folk theorem , '' _ econometrica _ , vol . 79 , no . 4 , pp . 12771318 , 2011 . | we consider a smart grid with an independent system operator (iso), and distributed aggregators who have energy storage and purchase energy from the iso to serve their customers. all the entities in the system are _foresighted_: each aggregator seeks to minimize its own _long-term_ payments for energy purchase and operational costs of energy storage by deciding how much energy to buy from the iso, and the iso seeks to minimize the _long-term_ total cost of the system (e.g. energy generation costs and the aggregators' costs) by dispatching the energy production among the generators. the decision making of the foresighted entities is complicated for two reasons. first, the information is decentralized among the entities: the iso does not know the aggregators' states (i.e. their energy consumption requests from customers and the amount of energy in their storage), and each aggregator does not know the other aggregators' states or the iso's state (i.e. the energy generation costs and the status of the transmission lines). second, the coupling among the aggregators is unknown to them due to their limited information. specifically, each aggregator's energy purchase affects the price, and hence the payments of the other aggregators. however, none of them knows how its decision influences the price because the price is determined by the iso based on its state.
we propose a design framework in which the iso provides each aggregator with a conjectured future price , and each aggregator distributively minimizes its own long - term cost based on its conjectured price as well as its locally - available information . the proposed framework can achieve the social optimum despite being decentralized and involving complex coupling among the various entities interacting in the system . simulation results show that the proposed foresighted demand side management achieves significant reduction in the total cost , compared to the optimal myopic demand side management ( up to 60% reduction ) , and the foresighted demand side management based on the lyapunov optimization framework ( up to 30% reduction ) . |
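the iso's online update of the conjectured prices described in the article above can be sketched as a projected stochastic-subgradient iteration on a toy linear market (the linear demand/supply responses and the 1/k step size are illustrative assumptions, not the paper's exact model):

```python
def clearing_price(a=10.0, b=1.0, c=1.0, iters=1000):
    """iterate a projected subgradient step on the conjectured price:
    raise it when the aggregators' best-response demand exceeds the
    dispatched supply, lower it otherwise, with a diminishing step."""
    lam = 0.0
    for k in range(1, iters + 1):
        d = a - b * lam                     # aggregators' best-response demand
        s = c * lam                         # dispatched generation
        lam = max(0.0, lam + (d - s) / k)   # projection onto non-negative prices
    return lam
```

with the default parameters the iterates settle at the market-clearing price a/(b+c) = 5, where demand equals supply; in the paper's setting the analogous fixed point is the optimal conjectured price under which the aggregators' best responses minimize the long-term total cost.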
yield stress materials denote assemblies of mesoscopic constituents, such as colloids, droplets or bubbles, that display macroscopic properties intermediate between those of fluids and solids. at rest, or under low stresses, yield stress materials display a solid-like behaviour, whereas they flow like liquids above a certain threshold referred to as the yield stress. their solid-like behaviour at rest originates either from a densely packed microstructure composed of soft objects, as in dense emulsions, jammed microgels, etc., or from the existence of weak attractive interactions that bind the constituents together and result in the formation of a sample-spanning network. in the latter case, solid-like properties emerge even at low packing fractions. this defines a category referred to as _attractive gels_, which encompasses various systems such as clay suspensions, carbon black gels and colloid-polymer mixtures. all sorts of colloids and larger particles of different shapes and attraction potentials also comply with this definition. over the past ten years, the rheological behaviour of attractive gels has proved to be by far one of the most challenging to understand among non-newtonian fluids. in particular, their mechanical properties at rest are strongly time-dependent: attractive gels display a reversible aging dynamics driven by the weak attractive forces between their constituents, which can be reversed by shear.
as a result, the gel elastic properties display a slow logarithmic or weak power-law increase with time, up to complete demixing of the system, which is historically referred to as syneresis. to make matters worse, particles are often denser than the surrounding fluid, which fosters syneresis and may trigger the collapse of the gel. such a density mismatch is further suspected to influence the behaviour of these systems under flow, although clear experimental evidence is still lacking. nonetheless, it is now well established that the behaviour of attractive gels under external shear involves heterogeneous flows that are highly sensitive to preshear history, boundary conditions, and/or finite-size effects. for instance, one can emphasize the case of laponite suspensions, whose steady-state flow properties were shown to be influenced by the nature of the boundaries: under external shear, smooth walls lead to the complete fluidization of the gel and to linear velocity profiles, while rough boundary conditions allow the growth of a steady shear band. moreover, as an illustration of the impact of confinement on flows of attractive gels, one can mention the spectacular shear-induced structuration observed at moderate shear rates and reported in silica suspensions, gels of anisotropic particles, attractive emulsions and carbon black gels. in such cases, the gel rearranges to form a striped pattern of log-rolling flocs aligned along the vorticity direction, whose origin and formation mechanism are still highly debated.
beyond the effects of bounding walls and confinement, attractive interactions alone are also responsible for long-lasting transients under external shear. on the one hand, experiments performed under constant and oscillatory stress reveal that the fluidization process of attractive gels initially at rest may take up to tens of hours. experiments coupled to velocimetry have revealed that this process is mostly activated, as evidenced by the arrhenius-like dependence of the fluidization time on the applied shear stress, and that it is strongly heterogeneous in both the vorticity and the flow directions. on the other hand, attractive gels show an overshoot in the stress response in shear startup experiments. such behaviour, which strongly depends on the preshear history, corresponds to the orientation and subsequent rupture of the gel microstructure into clusters. beyond the yield point, attractive gels display either homogeneous or shear-banded flows depending on the applied shear rate and on the boundary conditions. however, only a handful of studies have investigated the influence of the sample age, i.e. the duration separating preshear from the start of the experiment, on this shear-rate-induced fluidization scenario. recently, martin and hu have shown on laponite suspensions that aged-enough samples tend to exhibit long-lasting though transient shear banding, and that shear banding may also be trapped by the rapid aging of the non-flowing band and become permanent. the latter scenario is remarkable as it strongly differs from the classic shear-banding scheme, which relies on the mechanical instability of the sample under scrutiny. this study highlights the interplay between sample age and shear, and strongly urges further investigation of the impact of sample age on the shear-induced fluidization scenario in other attractive gels.
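the arrhenius-like dependence of the fluidization time on the applied shear stress mentioned above can be checked on (stress, fluidization-time) data with a simple log-linear fit; the model tau_f = tau0 * exp(-sigma/sigma0), its symbols, and the sample values in the test are illustrative assumptions rather than values from the cited experiments:

```python
import numpy as np

def fit_arrhenius(stress, tau_f):
    """fit tau_f = tau0 * exp(-stress / sigma0) by linear regression of
    log(tau_f) against stress; returns (tau0, sigma0)."""
    # degree-1 least-squares fit in semi-log coordinates
    slope, intercept = np.polyfit(stress, np.log(tau_f), 1)
    return float(np.exp(intercept)), float(-1.0 / slope)
```

a straight line in the (stress, log tau_f) plane is the signature of the activated (stress-induced) fluidization discussed in the text.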
To summarize, attractive gels behave neither entirely as solids nor entirely as fluids over a wide range of timescales. In the landscape of non-Newtonian fluids, they define a rather singular category of materials rightfully referred to as the ``twilight zone'' in the early classification established by L. Bilmes, which was recently adapted to complex fluids by G. H. McKinley. We are only beginning to understand the (transient) rheology of attractive gels, and more experiments are clearly needed to shed light on the heterogeneous flows of such highly time-dependent materials. In the present manuscript, we report spatially resolved data on the fluidization dynamics of a gel composed of weakly attractive non-Brownian particles. Velocimetry performed in a concentric-cylinder geometry simultaneously with shear startup experiments reveals that the steady-state behaviour is a subtle function of both the time during which the system was left to age before the beginning of the experiment and the value of the applied shear rate. Extensive experiments allow us to build a state diagram of steady-state flows in the (aging time, applied shear rate) plane. Two distinct regions roughly emerge: (i) homogeneous flows for shear rates larger than a critical value that weakly increases with the aging time, and (ii) steady-state shear banding elsewhere. As a key result of this work, the complete fluidization observed in the upper region of the state diagram involves a transient shear band that is progressively eroded through a series of dramatic fluidization events. These avalanche-like processes show up as large peaks in the temporal evolution of both the global shear stress and the slip velocity measured at the moving wall of the shear cell. As further confirmed by two-dimensional ultrasonic imaging, this fluidization process is spatially heterogeneous and may occur at different locations along the vorticity direction.
Finally, for a range of parameters (aging time, shear rate) in the vicinity of the boundary between the two main regions of the state diagram, we observe large fluctuations in the stress and slip-velocity signals, although the system does not reach complete fluidization. Such avalanche-like events are strongly coupled to variations in both the width of the shear band and the slip velocity. The present work provides, to our knowledge, some of the first experimental evidence of local avalanche-like fluidization events in a weakly attractive gel under shear. It also provides an extensive data set to test the relevance of flow stability criteria for shear banding and stands as a new challenge for spatially resolved models. The experimental setup allows us to perform time-resolved velocimetry simultaneously with standard rheometry. The rheological measurements are performed in a polished Taylor-Couette (concentric-cylinder) cell made of Plexiglas, in which the inner Mooney-Couette cylinder, a slightly conical cylinder of height 60 mm and radius 23 mm, is connected to a stress-controlled rheometer (AR-G2, TA Instruments) and positioned a small distance above the bottom of the outer cylinder. A solvent trap located at the top of the rotor and a homemade lid are used to prevent evaporation efficiently for up to about 9 hours. The Taylor-Couette cell is embedded in a water tank connected to a thermal bath, which allows us to keep the sample at a constant temperature. Local velocity profiles across the gap of the Taylor-Couette cell are recorded simultaneously with the global rheology by means of two different ultrasonic probes immersed in the water tank, which also ensures acoustic coupling to the shear cell. The first ultrasonic probe is a single high-frequency focused transducer that allows us to record the azimuthal velocity as a function of the radial position across the gap at the middle height of the shear cell, i.e.
at 30 mm from the bottom. Full technical details on this one-dimensional ultrasonic velocimetry (1D-USV) technique can be found in a previous publication. The second ultrasonic probe consists of a linear array of 128 transducers placed vertically at about 15 mm from the cell bottom. This transducer array is 32 mm high and gives access to images of the azimuthal velocity as a function of the radial and vertical positions over about 50% of the cell height. This two-dimensional ultrasonic velocimetry (2D-USV) technique is thoroughly described in Ref. . Both devices can be used simultaneously and roughly face each other in the water tank, i.e. they are separated by a large azimuthal angle. While the 1D-USV setup has the advantage of a better spatial resolution (about 40 µm against 100 µm), only the 2D-USV setup allows us to detect and monitor the presence of flow heterogeneities along the vorticity direction. Both velocimetry techniques require that the ultrasonic beam crossing the gap of the shear cell be backscattered either by the fluid microstructure itself or by acoustic tracers added during sample preparation when the system is acoustically transparent. Here, we shall emphasize that the microstructure of the sample, further detailed below, conveniently backscatters ultrasound in the single-scattering regime, which allows us to monitor the fluid velocity in a fully non-invasive way. [Fig. 2 caption: (a) elastic modulus vs time after a preshear for 2 min; inset: same data set in semilogarithmic scales; measurements performed under oscillatory shear stress in the linear regime with frequency 1 Hz and stress amplitude 0.05 Pa. (b) Flow curve, shear stress vs shear rate, obtained by first decreasing the shear rate continuously over 75 logarithmically spaced points and then increasing it over the same range.] Our Ludox gels are prepared following the same recipe as that used by Møller et al.
As we shall show below, the system is composed of non-Brownian particles made of permanently fused silica colloids. These particles are themselves reversibly aggregated in brine by van der Waals forces, leading to a space-spanning gel with solid-like properties at rest. A stable suspension of silica colloids (Ludox TM-40, Sigma-Aldrich, 40% wt. in silica colloids) of typical diameter 20 nm [see Fig. [fig1](a) for a scanning electron microscopy (SEM) image of a dilute sample (Supra 55 VP, Zeiss)] is first poured without any further purification into a 10% wt. deionized aqueous solution of sodium chloride (Merck Millipore) up to a mass ratio Ludox:NaCl of 6:13, corresponding to a final volume fraction of 7% in silica colloids. The mixture, which instantaneously becomes white and optically opaque, is then shaken intensely by hand for 2 min and left at rest for at least 15 h before being studied. Such a drastic change in the sample turbidity strongly suggests the rapid formation of aggregates at the micron scale. Indeed, direct observations using different techniques confirm the existence of a much coarser microstructure than the initial nanometric silica colloids. On the one hand, SEM images of a dried droplet extracted from a fresh sample previously diluted in a NaCl solution unveil the presence of particles whose size ranges from a few microns up to a hundred microns [Fig. [fig1](b)]. On the other hand, bright-field microscopy images (Eclipse Ti, Nikon) of the sample, neither altered nor diluted, further confirm the existence of these micron-sized particles [Fig. [fig1](c)-(e)], which are stable in time and robust to repeated shear, as confirmed by similar observations performed on samples submitted to different shear histories (images not shown).
To account for the formation of such a large-scale microstructure, which to our knowledge has not been reported in the literature previously, we propose the following scenario. Above pH 7, silica colloids are negatively charged and bear silanol (SiOH) and dissociated silanol (SiO−) groups that are poorly hydrated. In most previous works, NaCl is added in a relatively small amount (typically 0.05-0.5 M) such that electrostatic repulsion is screened, leading to the slow reversible aggregation of individual colloids up to the formation of a colloidal gel. Here, we add a much larger amount of salt (1.2 M) to the colloidal suspension, which leads to an ion exchange where protons are replaced by sodium ions. The loss of hydrogen bonding between the colloids and the solvent triggers the fast and irreversible aggregation of the silica colloids through the formation of interparticle siloxane bonds, resulting in the formation of the non-Brownian particles described above. Finally, these non-Brownian particles also aggregate reversibly due to van der Waals forces and form a space-spanning network, i.e. a gel, whose mechanical behaviour is studied below. Note that such a microstructure scatters ultrasound efficiently, allowing us to use both 1D- and 2D-USV without seeding the sample with tracer particles. The rheological features of the gel are displayed in Fig. [fig2]. A strong preshear applied for 2 min fully fluidizes the system, which quickly rebuilds once the preshear is stopped and subsequently shows pronounced aging. Indeed, small-amplitude oscillatory shear reveals that the elastic modulus becomes larger than the viscous modulus within about 20 s [see Fig.
1 in the ESI] and that the elastic modulus further increases logarithmically over more than 2 h [Fig. [fig2](a)]. Such aging dynamics are reproducible for a given preshear protocol and lead to the formation of a solid-like gel. The latter shows an elastic modulus that is nearly frequency independent and a critical yield strain of about 7% that weakly depends on the sample age [see Figs. 2 and 3 in the ESI]. Such solid-like behaviour is also reflected in the nonlinear rheology of the gel. Fig. [fig2](b) shows the flow curve, shear stress vs shear rate, measured by sweeping the shear rate down and back up. The flow curve shows an apparent yield stress of a few pascals and displays a complex non-monotonic shape together with strong hysteresis. Velocity profiles recorded simultaneously with the up and down shear rate sweeps shown in Fig. [fig2] reveal that wall slip and heterogeneous flows are involved over large ranges of shear rates, below 20 s⁻¹ during the decreasing ramp and up to about 200 s⁻¹ during the increasing ramp [see Fig. 4 in the ESI]. In particular, the large stress peak observed in Fig. [fig2](b) while ramping up the shear rate is the signature of the partial fluidization of the sample, which moves as a plug and totally slips at both walls before finally starting to flow homogeneously on the reversible branch at high shear rates, i.e. above about 200 s⁻¹. Furthermore, in Fig. [fig2](b), the shear stress shows a minimum at shear rates below about 40 s⁻¹ on the decreasing sweep, which hints at the existence of a critical shear rate below which no homogeneous flow is possible. The latter result is confirmed by performing steady-state measurements at constant applied shear rate starting from the liquid state, i.e. on a fully fluidized sample, in order to avoid the long-lasting transients that go along with shear startup experiments on a sample at rest, and which are at the core of Section [results].
Here, discrete shear rates of decreasing values, ranging from 500 s⁻¹ down to 0.1 s⁻¹, are successively applied for at least 300 s each. The flow remains homogeneous down to a critical shear rate, below which the sample exhibits an arrested band [Fig. [fig2b](a)]. This critical value is comparable to the one reported in a previous work on very similar Ludox gels. Moreover, as the shear rate is decreased below the critical value, the size of the fluidized band decreases roughly linearly with the shear rate [Fig. [fig2b](b)], in agreement with the classical ``lever rule''. The deviation from a straight line could be related to the wall slip that is present at the rotor. [Fig. caption: (a) velocity profiles for shear rates of 30, 20, 10, 5 and 1 s⁻¹ from top to bottom, plotted against the distance to the rotor; these data have been recorded starting from large shear rates (500 s⁻¹) and decreasing by successive steps of long enough duration to achieve a steady state at every imposed shear rate. (b) Size of the fluidized band normalized by the gap width vs the applied shear rate; the dashed line corresponds to the standard ``lever rule''.] To conclude this subsection, complex hysteresis cycles such as the one reported in Fig. [fig2] have also been reported for numerous other attractive gels, including carbon black gels and clay suspensions. Although significant progress has been made in extracting quantitative information from hysteresis loops, we rather chose to focus our study on shear startup experiments in order to fully decouple the fluid dynamics from any time-dependent effect related to the experimental protocol. The present experiments are thus all performed on a sample prepared in a solid-like state, so as to investigate the shear-induced fluidization scenario of the non-Brownian particle gel.
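The classical ``lever rule'' invoked above states that, for an applied rate below the critical shear rate, the flowing band is locally sheared at the critical rate while the arrested band carries no shear, so the flowing fraction of the gap is simply the ratio of the applied rate to the critical rate. A minimal sketch (the critical rate used in the example is purely illustrative):

```python
def lever_rule_band_width(gamma_dot, gamma_dot_c):
    """Fraction of the gap that flows under an applied shear rate gamma_dot,
    assuming the flowing band is sheared at the critical rate gamma_dot_c and
    the arrested band is not sheared at all (classical lever rule)."""
    return min(gamma_dot / gamma_dot_c, 1.0)

# with an illustrative critical rate of 20 s^-1:
print(lever_rule_band_width(10.0, 20.0))  # half the gap flows -> 0.5
print(lever_rule_band_width(50.0, 20.0))  # homogeneous flow -> 1.0
```

The linear decrease of the fluidized band size with decreasing shear rate reported in Fig. [fig2b](b) is exactly this proportionality; deviations from it, as noted above, can signal wall slip at the rotor.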
In practice, after preparing the gel in a well-defined and reproducible initial state thanks to preshearing, we monitor the stress response of the material to a constant shear rate over long durations. The results are discussed in Section [results]. Prior to any shear startup experiment, the sample is presheared at a high shear rate for 2 min in order to erase any previous shear history. We systematically check that the velocity profiles during the preshear step become homogeneous across the gap within a duration shorter than 2 min. The system is then left to rebuild for a duration that ranges between 1 and 100 min, during which we apply a small-amplitude oscillatory shear stress (stress amplitude 0.05 Pa, frequency 1 Hz) to monitor the evolution of the gel linear viscoelastic properties and to make sure that, for a given aging time, the gel reaches an initial state that is reproducible from one experiment to another. Finally, a constant shear rate is applied to the material over a long duration while we monitor the stress response together with local velocity profiles. We first fix the duration between the preshear and the start of the experiment and discuss the material response for shear startup experiments performed at different shear rates. For low enough shear rates, the stress relaxes smoothly towards a constant value, as illustrated in Fig. [fig3](a) for one such experiment. Velocity profiles acquired simultaneously reveal that the material remains at rest near the stator, while it flows in the vicinity of the rotor, where it shows strong wall slip [Fig. [fig3](b)]: the sample displays steady-state shear banding.
For larger shear rates, one observes a completely different scenario, both from global and local measurements. The stress displays a series of spike-like relaxation events, during which stress increases of small amplitude are followed by large drops [Fig. [fig3](a)]. Local measurements show that the gel flows heterogeneously. However, the shear band is only transient and the system always ends up being homogeneously sheared in steady state [Fig. [fig3](c)-(e)]. Qualitatively, the width of the shear band increases while strong wall slip is observed at the rotor, until the whole gap is fluidized and wall slip becomes negligible. This scenario is robustly observed at different shear rates, and full fluidization occurs sooner for larger shear rates [Fig. [fig3](a)]. A quantitative analysis of the velocity profiles is further proposed in Section [localanalysis]. [Fig. 4 caption: state diagram in the (aging time, applied shear rate) plane. In steady state, the gel may either be fully fluidized (blue region) or display shear banding. Steady shear banding is represented by a symbol whose size encodes the portion of the gap that is being sheared. Unsteady banding, which denotes flows where the band width and the slip velocity at the rotor display significant fluctuations, is represented by a different symbol.] To evidence the impact of the aging time on the material response, Figs. [fig3](f)-(j) show similar experiments performed for different aging times at the same shear rate. For long waiting times between the preshear and the start of the experiment, one observes a smooth stress relaxation, here again associated with steady shear-banded velocity profiles [Fig. [fig3](f) and (g)]. Decreasing the aging time to 30 or 5 minutes leads to a gel of lower elastic modulus. Local measurements further reveal that these weaker gels go through a transient shear-banding regime and that in both cases the steady state is homogeneous [Fig. [fig3](h) and (i)].
Here, unlike the case of transient shear banding reported in Fig. [fig3](a) for large shear rates in well-aged samples, where fluidization corresponds to a long series of successive stress relaxations, fluidization proceeds in a single stress drop together with small noisy fluctuations [Fig. [fig3](f)]. Finally, one observes that very young gels barely show any heterogeneous velocity profile during startup flow and reach a homogeneous steady state within a few tens of seconds [Fig. [fig3](j)]. In summary, the longer the sample ages after preshear, the more likely it is to exhibit a long-lasting heterogeneous fluidization scenario or to display steady shear banding. [Fig. 5 caption: (a) shear stress and slip velocity vs time; (b) width of the fluidized shear band normalized by the gap width vs time; (c) local shear rate within the shear band and global shear rate vs time; the horizontal dotted line indicates the shear rate applied by the rheometer; in (a)-(c), the vertical dashed line indicates the fluidization time.] [Fig. 6 caption: (a) shear stress and slip velocity vs time; (b) width of the fluidized shear band normalized by the gap width vs time; (c) local shear rate within the shear band and global shear rate vs time; the horizontal dotted line indicates the shear rate applied by the rheometer.] To go one step further and get an overall picture of the material response, we have performed extensive shear startup experiments by varying systematically both the aging time (1-100 min) and the applied shear rate. The entire data set is summarized in the flow state diagram pictured in Fig. [fig4], where the different symbols encode the three different scenarios that we may distinguish at the end of each startup experiment, which typically lasts 1500 to 3600 s.
In steady state, the attractive gel may either fully fluidize after a transient phase possibly involving strong fluctuations (blue region in Fig. [fig4]), as observed in the upper part of the state diagram for large shear rates quite independently of the sample age, or exhibit shear banding. The boundary between these two regimes defines a critical shear rate that depends in a nonlinear fashion on the sample age. Furthermore, we can discriminate between two types of behaviour when shear banding is observed. _Steady_ shear banding (Fig. [fig4]), for which both the width of the shear band and the slip velocity at the rotor display negligible fluctuations in steady state, is observed in particular at low aging times and under low enough shear rates, typically below 100 s⁻¹. In that case, the width of the arrested band may decrease or remain constant for increasing shear rates. Second, we have also observed _unsteady_ shear banding (Fig. [fig4]). In this case, both global and local measurements display significant fluctuations in steady state. These fluctuations are strikingly similar to those observed during the transients leading to complete fluidization in the upper part of the diagram. However, here the material never gets entirely fluidized and the shear band width does not show a systematic evolution towards the gap width, so that an unsteady shear band persists in steady state, at least within the finite duration of the experiments. These fluctuations are investigated in more detail in the next section.
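The three steady-state scenarios of the diagram can be told apart from the late-time statistics of the fluidized band width alone. A sketch of such a classification, with illustrative thresholds (95% of the gap for full fluidization, 5% relative fluctuations for a steady band) that are our own choices rather than values from the experiments:

```python
import numpy as np

def classify_steady_state(delta, gap, rel_fluct=0.05):
    """delta: time series of the fluidized band width (same unit as gap),
    restricted to the steady state. Returns 'fluidized' if the band spans
    (nearly) the whole gap, 'steady band' if its fluctuations are negligible,
    and 'unsteady band' otherwise. The 0.95 and rel_fluct thresholds are
    illustrative, not taken from the paper."""
    delta = np.asarray(delta, dtype=float)
    if delta.mean() >= 0.95 * gap:
        return "fluidized"
    if delta.std() <= rel_fluct * delta.mean():
        return "steady band"
    return "unsteady band"

gap = 2.0  # gap width in mm, hypothetical
print(classify_steady_state([1.98, 2.0, 1.99], gap))     # fluidized
print(classify_steady_state([0.8, 0.81, 0.79], gap))     # steady band
print(classify_steady_state([0.5, 1.2, 0.7, 1.4], gap))  # unsteady band
```

Applying such a rule to every (aging time, shear rate) run is one systematic way to populate a state diagram like Fig. [fig4].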
In this section, we focus on the strong fluctuations that are observed (i) during transient regimes leading to complete fluidization and (ii) during unsteady shear-banded flows in steady state. Global rheological data together with the detailed analysis of the corresponding local measurements are reported in Figs. [fig5] and [fig6] respectively (see also the corresponding Movies 1 and 2 in the ESI). Local data are analyzed as follows: linear fits of the velocity profiles in the fluidized part of the gap are used to estimate the local shear rate and to extrapolate the sample velocity at the rotor. The width of the shear band is obtained as the abscissa of the intersection between the fit and the zero-velocity axis, while the slip velocity is given by the difference between the rotor velocity and the extrapolated sample velocity at the rotor. Finally, the global shear rate is defined from the rotor velocity and the gap width. We first discuss Fig. [fig5], which shows a case of full fluidization in an aged gel. The stress relaxes by successive jumps up to full fluidization, after which it remains roughly constant [Fig. [fig5](a)]. The temporal evolutions of the band width [Fig. [fig5](b)] and of the local shear rate [Fig. [fig5](c)] show that fluidization occurs by successive spatial ``avalanches'' that are directly correlated with the stress drops. During the whole fluidization process, the slip velocity at the rotor decreases towards a negligible value that is reached at the fluidization time, diminishing by jumps that are in phase with the stress evolution. Figure [fig6] focuses on the spatiotemporal fluctuations observed during steady-state shear-banded flows for the same aging time as in Fig. [fig5] but at a lower shear rate. The stress exhibits periods of slow increase followed by rapid drops [Fig. [fig6](a)].
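The velocity-profile analysis described above (linear fit in the flowing region, zero-velocity intercept for the band width, extrapolation to the rotor for the slip velocity) can be sketched as follows on a synthetic profile; the geometry and all numbers are illustrative, not measured values:

```python
import numpy as np

def analyse_profile(r, v, v0):
    """r: distances to the rotor (mm) in the flowing region; v: azimuthal
    velocities there (mm/s); v0: rotor velocity (mm/s).
    A linear fit v(r) = a*r + b yields the local shear rate -a (velocity
    decreases away from the rotor), the band width -b/a (zero-velocity
    intercept) and the slip velocity v0 - b (rotor velocity minus the fluid
    velocity extrapolated at the rotor)."""
    a, b = np.polyfit(r, v, 1)
    return -a, -b / a, v0 - b

# synthetic banded profile: v(r) = 8 - 10 r over the flowing region
r = np.linspace(0.0, 0.7, 8)
v = 8.0 - 10.0 * r
shear, width, slip = analyse_profile(r, v, v0=10.0)
print(round(shear, 6), round(width, 6), round(slip, 6))
# recovers a local shear rate of 10 s^-1, a band width of 0.8 mm
# and a slip velocity of 2 mm/s
```

Running this fit on every recorded profile produces the time series of band width, slip velocity and local shear rate plotted in Figs. [fig5] and [fig6].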
Within an hour, about half of the gap gets fluidized, and the size of the fluidized band shows pronounced increases during short periods of time that are synchronized with the stress drops [Fig. [fig6](b)]. The dynamics of the fluid at the rotor are strongly correlated with the global fluctuations, as evidenced by the sudden peaks of the slip velocity when the stress drops. Note that, although the slip velocity decreases on average over the whole duration of the experiment, it remains at a high level of about 20%, in noticeable contrast with the fully fluidized scenario described in the previous paragraph. Such oscillatory dynamics in the vicinity of the rotor are reminiscent of stick-slip. Indeed, the fluidized band shows time intervals during which the local shear rate remains constant [Fig. [fig6](c)] while the band width slowly decreases. As a result, the stress slowly builds up until the fluidized band experiences a large slip event at the moving wall and gets rejuvenated. These dynamics look similar to the stick-slip motion reported in clays in the pioneering work of Pignon et al. However, in the latter case, stick-slip occurs along fracture planes located in the bulk sample, while here slip at the wall appears to play a key role. Finally, we shall emphasize that in both types of scenarios, the peaks in both the stress and the slip velocity are only seen in the presence of shear banding. Although it is not clear whether the shear banding or the slip at the rotor is the cause of the oscillations, these peculiar dynamics hint at a subtle flow-microstructure coupling that certainly deserves more investigation. [Fig. 7 caption: (a) shear stress vs time during a single stress drop event extracted from a shear startup experiment during which the material is fully fluidized; inset: stress vs time for the whole experiment, with the signal pictured in the main graph shown in red. (b) Spatiotemporal diagram of the velocity data as a function of the radial position, measured from the rotating inner wall, and time; data obtained with 1D-USV. (c) Spatiotemporal diagram of the velocity data as a function of the vertical position, measured from the top of the transducer array, and time; data obtained with 2D-USV. In both (b) and (c), the fluid velocity is color-coded in mm/s.] [Figs. 8 and 9 captions: (a) shear stress vs time during a single stress relaxation episode extracted from a shear startup experiment at the end of which the material is fully fluidized; (b) spatiotemporal diagram of the velocity data as a function of the vertical position, measured from the top of the transducer array, and time; data obtained with 2D-USV; the fluid velocity is color-coded in mm/s.] To get better insight into the avalanche-like fluidization scenario, we now focus on the single stress drop shown in Fig. [fig7](a). This event is extracted from a shear startup experiment whose steady-state flow corresponds to a fully fluidized sample [inset of Fig. [fig7](a)]. The velocity profiles recorded simultaneously with 1D-USV are plotted as a spatiotemporal diagram in Fig. [fig7](b). Quite surprisingly, the one-dimensional flow profiles before and after the stress drop are very similar, as they all show shear localization over about half of the gap. By the time the stress reaches its maximum value, about 80% of the gap is sheared. Then shear abruptly localizes again over about 1 mm close to the rotor at the beginning of the stress relaxation.
Therefore, although the strong fluctuations of the band width observed before the stress peak appear to be correlated with those of the stress, the drop of about 15% in the stress value cannot be explained by these one-dimensional data in a straightforward manner. We emphasize that the 1D-USV measurements are performed at a given height of the Taylor-Couette cell, so that they may reflect the evolution of the global rheology only if the flow is homogeneous along the vorticity direction. As a matter of fact, velocity profiles recorded simultaneously over the whole height of the Taylor-Couette cell through 2D-USV demonstrate that the flow is strongly heterogeneous in the vertical direction. Figure [fig7](c) shows a spatiotemporal plot of the fluid velocity along the vertical axis at a fixed radial position close to the rotor (see also Movie 3 in the ESI). Depending on the position along the vertical axis, the material can simultaneously be either solid-like, as evidenced by areas of very low velocities in Fig. [fig7](c) (see for instance the top of the region of interest), or fluid-like and flowing at high velocities, as observed at other heights. Note that the 1D-USV measurements are performed at a fixed height and are fully consistent with the 2D-USV data. Our measurements further reveal that the particular avalanche-like event analyzed here only corresponds to partial fluidization: at the end of the stress drop, the flow is still heterogeneous, even though the arrested regions present at early times have given way to flowing regions arranged in a vertically banded structure. In fact, the inset of Fig. [fig7](a) shows that several subsequent stress drop events still have to take place before complete fluidization of the sample.
To test the existence of a local fluidization scenario that would be generic to all avalanche-like events, we analyze additional experiments in the full-fluidization region of the state diagram at two comparable shear rates, the second being 200 s⁻¹. The temporal evolution of the shear stress is reported respectively in Figs. [fig8](a) and [fig9](a), and the corresponding 2D-USV measurements are displayed as spatiotemporal diagrams in Figs. [fig8](b) and [fig9](b) for a fixed radial position (see also Movies 4 and 5 in the ESI). Here again, the temporal evolution of the velocity field is strongly heterogeneous along the vertical direction in both experiments. Furthermore, although these two experiments are performed at comparable shear rates, the local fluidization scenario is strikingly different. In the shear startup experiment reported in Fig. [fig8], the lower part of the region of interest is fully fluidized after the first stress relaxation, while the upper part of the sample requires two additional avalanches to fully fluidize in turn. On the other hand, the fluidization process shown in Fig. [fig9] starts from the upper part of the region of interest before extending to the lower part of the Taylor-Couette cell. Here, each avalanche-like event involves large pieces of the sample with a typical vertical extension of about 5 mm [see the events marked by white dotted lines in Fig. [fig9](a)]. Some of these events only show up in the stress response as very small peaks, while their local signature is much more impressive. At later times, up to 500 s, the sample even appears to fluidize, or at least to be set into motion, through regular steps occurring from top to bottom with a characteristic time of 10-20 s.
The experiments shown in Figs. [fig8] and [fig9] allow us to conclude that the peaks in the stress signal may encompass very different local scenarios. Since both experiments were performed with the same aging time, these results illustrate the high sensitivity to ``initial conditions,'' i.e. to the possibly different arrangement of the heterogeneous microstructure after the aging process, and the subtle interplay between aging and shear banding, with no systematic failure scenario along the cell height. On the one hand, we conclude from Section [sampleprep] that the gel under scrutiny displays a discontinuous yielding transition that is very similar to the one reported in other attractive gels. Indeed, one observes that, when the initial condition is a completely fluidized state, applying a series of shear rates of decreasing values leads to the growth of a steady-state arrested band below a critical shear rate. The value of this critical shear rate is inherent to the sample and results from the flow instability at low shear rates. This is fully consistent with previous results on Ludox gels. On the other hand, the shear startup experiments reported in Section [results] and performed on the sample prepared in the solid state allow us to identify a second critical shear rate. For applied shear rates below this second critical value, the sample gets only partially fluidized and exhibits steady-state shear banding, while long-lasting yet transient shear banding is observed above it. Such behaviour can be interpreted in the framework recently described by Martin and Hu for Laponite samples.
Under an external shear, the sample in the vicinity of the rotor, initially at rest, gets fluidized, while the sample close to the stator remains at rest and keeps aging. Depending on the intensity of the shear rate, such a heterogeneous velocity profile may either be trapped by the sample aging and become permanent, as observed below the second critical shear rate, or become homogeneous after a transient shear-banding regime whose duration decreases for increasing shear rates. Such an argument allows us to understand that the second critical shear rate is sensitive to the gap size and to the cell geometry, which we have verified by performing identical shear startup experiments with different gap widths. These experiments show that the second critical shear rate decreases as the gap size is reduced [see Fig. 6 in the ESI]. Therefore, steady-state shear banding trapped by aging should be distinguished from shear banding resulting from an intrinsic flow instability, which should not depend on the geometry. Finally, we have shown in Section [pd] that the second critical shear rate is a nonlinear function of the sample age, again in contrast with the first critical shear rate. Regarding the transient fluidization dynamics, our work unveils the existence of very characteristic peaks in the global rheological data. These peaks correspond to local avalanches associated with the abrupt fluidization of shear-banded velocity profiles. An avalanche proceeds in two steps. First, the sample ages, as evidenced by the slow increase of the stress indicative of progressive consolidation, while the shear band remains roughly fixed. Second, the sample suddenly fluidizes before localizing again, at least partially, while the stress drops and the shear rate increases. This scenario strikingly recalls the transient shear banding reported in Laponite clay suspensions and fits well with the stability criterion recently proposed by Fielding et al.
In practice, though, the physical origin of the avalanche-like and successive stress relaxation events remains to be determined. Aging is more pronounced in our system than in the attractive gels that have been studied previously, as evidenced by the large values of after preshear [see inset of Fig. [fig2](a)]. Therefore, syneresis driven by the aggregation and/or sedimentation of the colloidal flocs due to their density mismatch with the surrounding fluid could also play a role and interfere with the traditional picture of a competition between aging and shear rejuvenation. As syneresis is negligible in most of the colloidal gels that have been the topic of rheophysical studies so far, it could also explain why such stress oscillations have, to our knowledge, never been reported in the literature. An alternative explanation for the stress oscillations could be related to confinement. Indeed, as the size distribution of the fused silica aggregates is wide and extends up to 100 μm, the sample mesostructure most likely involves aggregates whose size becomes comparable to that of the gap, at least for long enough rest durations.
In this framework, stress oscillations would result from a competition between shear-induced structuration, as described in the introduction, and the strong aging of the sample. Such an interpretation would also account for the stick-slip-like motion of the fluidized band at the moving wall. Nonetheless, despite systematic monitoring of the sample during shear startup experiments, no structuration or spatial pattern could be observed. Moreover, supplemental shear startup experiments in narrower gaps show that, for a given shear rate, stronger confinement leads to the disappearance of the stress oscillations and to homogeneous velocity profiles [see Fig. 6 in the ESI]. This last result strongly suggests that confinement alone cannot account for the stress oscillations. Finally, the present study has focused on experiments performed under smooth boundary conditions, revealing the presence of strong wall slip associated with heterogeneous, shear-banded flows, while fully fluidized states show negligible wall slip. Yet the roughness and/or chemical nature of the walls are known to have a crucial influence not only on rheological measurements but also on the local flow, both close to the walls and in the bulk. Therefore, the influence of boundary conditions on the complex fluidization scenario reported here appears as the next key question to address in future work. As a first step, we report preliminary tests on the role of boundary conditions on the above results in the ESI. The flow curve measured under "rough" boundary conditions in a sand-blasted Plexiglas Taylor-Couette cell with a typical roughness of 1 μm (to be compared to a few tens of nanometers for the "smooth" cell used so far) shows a smaller, yet significant, hysteresis [see Fig. 1(b) in the ESI] as well as wall slip at low shear rates (see Fig.
5 in the ESI). Furthermore, one can see a strong difference between the velocity profiles recorded simultaneously with the flow curve under rough and smooth boundary conditions, respectively (compare Figs. 4 and 5 in the ESI), although the surface roughness of the rough boundary does not match exactly the size of the microstructure. In particular, the transient fluidization episode reported around 7 s with smooth boundary conditions [Fig. [fig2](b) and Fig. 4 in the ESI] is absent with rough walls (see Fig. 5 in the ESI). These preliminary results illustrate the strong impact of the boundary conditions and call for systematic experiments to quantify their effect, especially on the state diagram reported in Fig. . We have investigated the local scenario associated with the shear-induced fluidization of an attractive gel made of non-Brownian particles. We have identified a critical shear rate that separates steady shear-banded flows from full fluidization and that exhibits a nonlinear dependence on sample age. This critical shear rate is much larger than the one signaling flow instability in experiments starting from fluidized states, and it depends on the shear cell geometry as well as possibly on the preshear intensity imposed prior to the shear startup experiment.
As a key result, we have shown that for shear rates larger than , the fluidization of the sample involves successive local avalanche-like events that are heterogeneously distributed along the cell height and whose dynamics are strongly coupled to both the slip behaviour at the wall and the width of the shear band. Such avalanches appear in the stress signal as peaks, whose individual properties are strongly reminiscent of stick-slip phenomena. Future work will focus on the early stage of shear startup experiments, and in particular on the stress overshoot that occurs before the stress relaxation, as well as on the influence of the boundaries and confinement on the fluidization scenario. The authors thank J.-Chapel, B. Keshavarz and G. Ovarlez for stimulating discussions, T. Gibaud for allowing us to use his microscope, J. Laurent for her precious assistance with the SEM experiments, as well as two anonymous referees for their constructive remarks on our manuscript. This work was supported by JSPS and CNRS under the Japan-France Research Cooperative Program (PRC CNRS/JSPS RheoVolc). A.K. acknowledges support by JSPS KAKENHI Grant No. . S.M. acknowledges funding from Institut Universitaire de France and from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) and ERC Grant Agreement No. 258803.
P. Coussot, J. Non-Newtonian Fluid Mech., 2015, 211, 31–49.
D. Bonn, J. Paredes, M. Denn, L. Berthier, T. Divoux and S. Manneville, arXiv:1502.05281, 2015.
P. C. F. Møller, A. Fall and D. Bonn, Europhys. Lett., 2009, 87, 38004.
G. Ovarlez, S. Cohen-Addad, K. Krishan, J. Goyon and P. Coussot, J. Non-Newtonian Fluid Mech., 2013, 193, 68–79.
N. Balmforth, I. Frigaard and G. Ovarlez, Annu. Rev. Fluid Mech., 2014, 46, 121–146.
R. Bonnecaze and M. Cloitre, Adv. Polym. Sci., 2010, 236, 117–161.
T. G. Mason, J.
Bibette and D. A. Weitz, J. Colloid Interface Sci., 1996, 179, 439–448.
M. Cloitre, M. Borrega, F. Monti and L. Leibler, C. R. Phys., 2003, 4, 221–230.
J. Seth, M. Cloitre and R. Bonnecaze, J. Rheol., 2006, 50, 353–376.
M. Siebenbürger, M. Fuchs and M. Ballauff, Soft Matter, 2012, 8, 4014–4024.
P. Lu, E. Zaccarelli, F. Ciulla, A. B. Schofield, F. Sciortino and D. A. Weitz, Nature, 2008, 453, 499–503.
P. Lu and D. Weitz, Annu. Rev. Condens. Matter Phys., 2013, 4, 217–233.
A. Mourchid, A. Delville, J. Lambard, E. Lécolier and P. Levitz, Langmuir, 1995, 11, 1942–1950.
F. Pignon, A. Magnin, J.-M. Piau, B. Cabane, P. Lindner and O. Diat, Phys. Rev. E, 1997, 56, 3281–3289.
E. Paineau, I. Bihannic, C. Baravian, A.-M. Philippe, P. Davidson, P. Levitz, S. Funari, C. Rochas and L. Michot, Langmuir, 2011, 27, 5562–5573.
V. Trappe and D. A. Weitz, Phys. Rev. Lett., 2000, 85, 449–452.
V. Trappe, V. Prasad, L. Cipelletti, P. N. Segre and D. A. Weitz, Nature, 2001, 411, 772–775.
K. Pham, S. Egelhaaf, P. Pusey and W. Poon, Phys. Rev. E, 2004, 69, 011503.
E. Mock and C. Zukoski, J. Rheol., 2007, 51, 541–559.
N. Reddy, Z. Zhang, M. Lettinga, J. Dhont and J. Vermant, J. Rheol., 2012, 56, 1153–1174.
L. Cipelletti, S. Manley, R. C. Ball and D. A. Weitz, Phys. Rev. Lett., 2000, 84, 2275–2278.
C. Derec, G. Ducouret, A. Ajdari and F. Lequeux, Phys. Rev. E, 2003, 67, 061403.
G. Ovarlez and X. Chateau, Phys. Rev. E, 2008, 77, 061403.
A. Negi and C. Osuji, Phys. Rev. E, 2009, 80, 010404.
N. Koumakis and G. Petekidis, Soft Matter, 2011, 7, 2456–2470.
G. Scherer, J. Non-Cryst. Solids, 1989, 108, 18–27.
S. Manley, J. Skotheim, L. Mahadevan and D. Weitz, Phys. Rev. Lett., 2005, 94, 218302.
R. Buscall, T. H. Choudhury, M. A. Faers, J. W. Goodwin, P. A. Luckham and S. J. Partridge
, Soft Matter, 2009, 5, 1345–1349.
G. Brambilla, S. Buzzaccaro, R. Piazza, L. Berthier and L. Cipelletti, Phys. Rev. Lett., 2011, 106, 118302.
L. J. Teece, M. A. Faers and P. Bartlett, Soft Matter, 2011, 7, 1341–1351.
P. Bartlett, L. Teece and M. Faers, Phys. Rev. E, 2012, 85, 021404.
R. Buscall and I. McGowan, Faraday Discuss., 1983, 76, 277–290.
T. Gibaud, C. Barentin and S. Manneville, Phys. Rev. Lett., 2008, 101, 258302.
T. Gibaud, C. Barentin, N. Taberlet and S. Manneville, Soft Matter, 2009, 5, 3026–3037.
J. J. V. DeGroot, C. W. Macosko, T. Kume and T. Hashimoto, J. Colloid Interface Sci., 1994, 166, 404–413.
R. Navarrete, L. Scriven and C. Macosko, J. Colloid Interface Sci., 1996, 180, 200–211.
A. Montesi, A. Peña and M. Pasquali, Phys. Rev. Lett., 2004, 92, 058303.
C. O. Osuji, C. Kim and D. A. Weitz, Phys. Rev. E, 2008, 77, 060402(R).
A. Negi and C. Osuji, Rheol. Acta, 2009, 48, 871–881.
V. Grenard, N. Taberlet and S. Manneville, Soft Matter, 2011, 7, 3920–3928.
J. Vermant and M. J. Solomon, J. Phys.: Condens. Matter, 2005, 17, R187–R216.
V. Gopalakrishnan and C. Zukoski, J. Rheol., 2007, 51, 623–644.
T. Gibaud, D. Frelat and S. Manneville, Soft Matter, 2010, 6, 3482–3488.
J. Sprakel, S. Lindström, T. Kodger and D. Weitz, Phys. Rev. Lett., 2011, 106, 248303.
V. Grenard, T. Divoux, N. Taberlet and S. Manneville, Soft Matter, 2014, 10, 1555–1571.
A. Stickland, A. Kumar, T. Kusuma, P. Scales, A. Tindley, S. Biggs and R. Buscall, Rheol. Acta, 2015, 54, 337–352.
H. Chan and A. Mohraz, Phys. Rev. E, 2012, 85, 041403.
C. Perge, N. Taberlet, T. Gibaud and S. Manneville, J. Rheol., 2014, 58, 1331–1357.
P. Varadan and M. Solomon, Langmuir, 2001, 17, 2918–2929.
A. Mohraz and M. Solomon, J. Rheol., 2005, 49, 657–681.
B. Rajaram and A.
Mohraz, Soft Matter, 2010, 6, 2246–2259.
T. Divoux, M.-A. Fardin, S. Manneville and S. Lerouge, Annu. Rev. Fluid Mech., 2016, 48, 81–103.
J. S. Raynaud, P. Moucheront, J. C. Baudez, F. Bertrand, J. P. Guilbaud and P. Coussot, J. Rheol., 2002, 46, 709–732.
J. Martin and Y. Hu, Soft Matter, 2012, 8, 6940–6949.
G. Ovarlez, S. Rodts, X. Chateau and P. Coussot, Rheol. Acta, 2009, 48, 831–844.
A. Fall, J. Paredes and D. Bonn, Phys. Rev. Lett., 2010, 105, 225502.
L. Bilmes, Nature, 1942, 150, 432–433.
G. McKinley, Rheology Bulletin, 2015, 84, 14–17.
R. Moorcroft and S. Fielding, Phys. Rev. Lett., 2013, 110, 086001.
J. Colombo, A. Widmer-Cooper and E. Del Gado, Phys. Rev. Lett., 2013, 110, 198301.
J. Colombo and E. Del Gado, J. Rheol., 2014, 58, 1089–1116.
S. Fielding, Rep. Prog. Phys., 2014, 77, 102601.
S. Manneville, L. Bécu and A. Colin, Eur. Phys. J. AP, 2004, 28, 361–373.
T. Gallot, C. Perge, V. Grenard, M.-A. Fardin, N. Taberlet and S. Manneville, Rev. Sci. Instrum., 2013, 84, 045107.
P. C. F. Møller, S. Rodts, M. A. J. Michels and D. Bonn, Phys. Rev. E, 2008, 77, 041507.
M. Kobayashi, F. Juillerat, P. Galletto, P. Bowen and M. Borkovec, Langmuir, 2005, 21, 5761–5769.
W. Heston, R. Iler and G. Sears, J. Phys. Chem., 1960, 64, 147–150.
J. Trompette and M. Clifton, J. Colloid Interface Sci., 2004, 276, 475–482.
X. J. Cao, H. Z. Cummins and J. F. Morris, Soft Matter, 2010, 6, 5425–5433.
D. Truzzolillo, V. Roger, C. Dupas, S. Mora and L. Cipelletti, e-print cond-mat/1411.2265.
L. Allen and E. Matijević, J. Colloid Interface Sci., 1969, 31, 287–296.
L. Allen and E. Matijević, J. Colloid Interface Sci., 1970, 33, 420–429.
J. Depasse and A. Watillon, J. Colloid Interface Sci., 1970, 33, 430–438.
J. Depasse, J. Colloid Interface Sci.
, 1997, 194, 260–262.
E. Drabarek, J. Bartlett, H. Hanley, J. Woolfrey and C. Muzny, Int. J. Thermophys., 2002, 23, 145–160.
P. Coussot, J. S. Raynaud, F. Bertrand, P. Moucheront, J. P. Guilbaud, H. T. Huynh, S. Jarny and D. Lesueur, Phys. Rev. Lett., 2002, 88, 218301.
A. Ragouilliaux, B. Herzhaft, F. Bertrand and P. Coussot, Rheol. Acta, 2006, 46, 261–271.
A. Ten Brinke, L. Bailey, H. Lekkerkerker and G. Maitland, Soft Matter, 2007, 3, 1145–1162.
T. Divoux, V. Grenard and S. Manneville, Phys. Rev. Lett., 2013, 110, 018304.
V. Viasnoff and F. Lequeux, Phys. Rev. Lett., 2002, 89, 065701.
F. Pignon, A. Magnin and J.-M. Piau, J. Rheol., 1996, 40, 573–587.
P. Coussot, Q. D. Nguyen, H. T. Huynh and D. Bonn, J. Rheol., 2002, 46, 573–589.
P. Coussot, H. Tabuteau, X. Chateau, L. Tocquer and G. Ovarlez, J. Rheol., 2006, 50, 975–994.
G. Ovarlez and P. Coussot, Phys. Rev. E, 2007, 76, 011406.
P. C. F. Møller, J. Mewis and D. Bonn, Soft Matter, 2006, 2, 274–283.
J. Seth, C. Locatelli-Champagne, F. Monti, R. Bonnecaze and M. Cloitre, Soft Matter, 2012, 8, 140–148.
V. Mansard, L. Bocquet and A. Colin, Soft Matter, 2014, 10, 6984–6989.
Caption of Supplemental Fig. [supfig1]: (a) Elastic ( ) and viscous ( ) moduli vs time after a preshear at s for 2 min ( Hz, Pa). Same data as in Fig. 2(a) in the main text. (b) Flow curve, shear stress vs applied shear rate, obtained by decreasing ( ) and increasing ( ) the shear rate in order from 10 to 10 s with a waiting time of s per point. Black (resp. red) symbols correspond to results in a smooth (resp. rough) geometry.
Five movies are provided as supplemental materials to Figs. 5–9 of the main text. All movies illustrate the temporal evolution of the global rheology and of the 1D and/or 2D velocity profiles recorded simultaneously with the rheology during successive avalanche-like events. Supplemental Fig. [supfig1](a) displays the evolution of the elastic and viscous moduli of the sample after preshear. The gel rebuilds quickly, as evidenced by the fact that the elastic modulus becomes larger than the viscous modulus about 20 s after the preshear has been stopped. Supplemental Fig. [supfig1](b) shows flow curves vs measured by rapidly sweeping down the shear rate from to s and back up. The flow curves have complex non-monotonic shapes together with strong hysteresis under both smooth and rough boundary conditions. Supplemental Fig. [supfig2] illustrates the result of a frequency sweep performed at a constant strain ( %) from 10 to 0.02 Hz, and for different aging times , 60 and 100 min after stopping the preshear. The elastic modulus is independent of the frequency and increases with the aging time, in agreement with Supplemental Fig. [supfig1]. The viscous modulus shows a broad minimum with no sign of an increasing part at low frequencies, a feature that is typical of soft glassy systems. Caption of Supplemental Fig. [supfig2]: Elastic ( ) and viscous ( ) moduli vs frequency. The frequency sweep is performed at a constant strain % and at different aging times after preshear. [color, (min)]: ( , 30); ( , 60); ( , 100). Supplemental Fig. [supfig3] shows the results of a strain sweep performed on the gel at a constant frequency Hz for different aging times after preshear. The gel yields at a constant strain of about 7% that is roughly independent of the aging time. Caption of Supplemental Fig. [supfig3]: Elastic ( )
and viscous ( ) moduli vs the strain amplitude. The strain sweep is performed at a fixed frequency Hz with a waiting time of 8 s per point, and at different times after the preshear. [color, (min)]: ( , 30); ( , 60); ( , 100). The vertical dashed line at % emphasizes the yield point, defined as the point at which . Supplemental Figs. [supfig4] and [supfig5] show the rheological and 2D-USV data obtained under smooth and rough boundary conditions respectively, by first decreasing the shear rate and then increasing it back again. The corresponding flow curves are shown in Fig. 1(b) of the present ESI. Here we also show spatiotemporal plots of the velocity profiles as a function of the radial position at a given height ( mm) in the Taylor-Couette cell [Supplemental Figs. [supfig4](b) and [supfig5](b)], and as a function of the vertical position , at a given radial position inside the gap ( mm) [Supplemental Figs. [supfig4](c) and [supfig5](c)]. Velocity data are shown only for and 340 s under smooth and rough boundary conditions respectively, due to the limitation of the ultrasonic pulse repetition frequency to 20 kHz, which sets an upper bound on the measurable velocities. Caption of Supplemental Fig. [supfig4]: (a) Shear rate and shear stress vs time obtained by first decreasing continuously from to s in 75 logarithmically spaced points of duration s each, and then increasing over the same range. (b) Spatiotemporal diagram of the velocity data as a function of the distance to the rotor and time, at mm. (c) Spatiotemporal diagram of the velocity data as a function of the vertical position and time, at mm. The vertical position is measured from the upper boundary of the 2D-USV probe. The fluid velocity is color-coded in mm/s. For smooth boundary conditions, the flow curve exhibits a large hysteresis, mainly localized in a narrow range of shear rates [Supplemental Fig. [supfig1](b)].
During the decreasing ramp of , the sample, which is first sheared homogeneously at large shear rates, enters a total wall slip regime at s, i.e., for s [Supplemental Fig. [supfig4](b)]. This regime persists until is increased back up again. The subsequent fluidization of the material is abrupt and occurs in a narrow range of shear rates, s, which coincides with the sharp increase of the global stress. Supplemental Figs. [supfig4](b,c) reveal that the subsequent drop in the stress corresponds to flow arrest, i.e., total wall slip, until the sample fluidizes again for s, and that the local behavior of the sample is roughly homogeneous along the whole height of the cell during both ramps. Supplemental Fig. [supfig5] illustrates the exact same protocol as that reported in Supplemental Fig. [supfig4], but performed in a rough Taylor-Couette cell. As also seen in Supplemental Fig. [supfig1](b), the rheological hysteresis is far less pronounced in comparison to the data obtained with smooth boundary conditions. Here the flow remains homogeneous along the vertical axis during both ramps [Supplemental Fig. [supfig5](c)]. The velocity profiles at a given height in the Taylor-Couette cell also display very different behavior from the smooth case. In the decreasing ramp of , the stress plateaus between s and s, while shear localizes at the rotor. Below , the stress displays a kink towards a second plateau, while the sample exhibits a plug flow. Increasing back up, the plug flow persists up to shear rates of a few 10 s. Above the latter value, the fluidization of the sample proceeds from the rotor and involves transient banding. Full fluidization is not captured in Supplemental Fig. [supfig5](b) for the technical reason mentioned above. Supplemental Fig. [supfig6] shows the influence of the gap size on the fluid response to a shear startup experiment performed at s on a sample left at rest for min.
In a gap of size mm, the same value as in the main text, the stress response displays a large number of peaks together with unsteady shear banding over 2400 s [Supplemental Fig. [supfig6](b)]. Decreasing the size of the gap radically changes the sample behavior. In a gap twice as small ( mm), the stress response exhibits fewer peaks and the sample fluidizes entirely in about s [Supplemental Fig. [supfig6](c)]. Decreasing the gap by a factor of 4 ( mm) leads to a smooth stress response and to homogeneous velocity profiles from the beginning of the experiment, with significant slip at the moving boundary [Supplemental Fig. [supfig6](d)].
T. G. Mason, J. Bibette and D. A. Weitz, Phys. Rev. Lett., 1995, 75, 2051–2054.
T. Gallot, C. Perge, V. Grenard, M.-A. Fardin, N. Taberlet and S. Manneville, Rev. Sci. Instrum., 2013, 84, 045107.
| We report on the fluidization dynamics of an attractive gel composed of non-Brownian particles made of fused silica colloids. Extensive rheology coupled to ultrasonic velocimetry allows us to characterize the global stress response together with the local dynamics of the gel during shear startup experiments. In practice, after being rejuvenated by a preshear, the gel is left to age during a time before being submitted to a constant shear rate . We investigate in detail the effects of both and on the fluidization dynamics and build a detailed state diagram of the gel response to shear startup flows. The gel may either display transient shear banding towards complete fluidization, or steady-state shear banding. In the former case, we unravel that the progressive fluidization occurs by successive steps that appear as peaks on the global stress relaxation signal.
Flow imaging reveals that the shear band grows up to complete fluidization of the material by sudden avalanche-like events which are distributed heterogeneously along the vorticity direction and correlated with large peaks in the slip velocity at the moving wall. These features are robust over a wide range of values of and , although the very details of the fluidization scenario vary with . Finally, the critical shear rate that separates steady-state shear banding from steady-state homogeneous flow depends on the width of the shear cell and exhibits a nonlinear dependence on . Our work brings valuable experimental data on transient flows of attractive dispersions, highlighting the subtle interplay between shear, wall slip and aging, whose modeling constitutes a major challenge that has yet to be met.
The spread of contagion (information diffusion or spread of an infection) is a universal phenomenon that is extensively studied in the context of physical, biological, and social networks. Such cascades can have one or multiple sources (or _seeds_) and spread from infected nodes to neighbors through the link structure. A motivating application for the study of influence is viral marketing strategies, in which the influence of a set of people in a social network is the number of adoptions triggered if we give them free copies of a product. The problem also has important applications beyond social graphs, such as placing sensors in water distribution networks for detecting contamination. A popular model for information diffusion is _independent cascade_ (IC), in which an independent random variable is associated with each (directed) edge to model the degree of influence of on . A single propagation instance is obtained by instantiating all edge variables. We then study the distribution of a property of interest, such as the number of infected nodes, over these random instances. The simplest and most studied IC model is _binary IC_, in which the range of the edge random variables is binary. A biased coin of probability is flipped for each directed edge. Accordingly, the edge can be either _live_, meaning that once is infected, is also infected, or _null_. This model was formalized in a seminal work by Kempe et al. and is based on earlier studies by Goldenberg et al. Note that each direction of an undirected edge may have its own independent random variable, since influence is not necessarily symmetric. A particular propagation instance is specified by the set of live edges, and a node is infected by a seed set in this instance if and only if it is reachable from a seed node. The _influence_ of is formally defined as the expectation, over instances, of the number of infected nodes.
Instead of working directly on this probabilistic IC model, Kempe et al. proposed a simulation-based approach, in which a set of propagation instances (graphs) is generated in Monte Carlo fashion according to the influence model. The average influence of on is an unbiased estimate that converges to the expectation on the probabilistic model. The ability to compute influence with respect to an _arbitrary_ set of propagation instances has significant advantages, as it is useful for instances generated from traces or by more complex models, which exhibit correlations between edges that cannot be captured by the simplified IC model. Moreover, the average behavior of a probabilistic model on a small set of instances captures its "typical" behavior, which is often more relevant than the expected value when the variance is very high. A basic primitive in the study of influence is the _influence query_: compute (or approximate) the influence of a query set of seed nodes. With binary influence, this amounts to performing graph searches from the seed set in multiple instances. Unfortunately, this does not scale well when many queries are posed over graphs with millions of nodes. Even more computationally challenging is the fundamental _influence maximization_ problem, which is finding the most potent seed set of a certain size or cost. The problem was formalized by Kempe et al. and inspired by Richardson and Domingos. Kempe et al. showed that, even when the influence function is deterministic (but the number of seeds is a parameter), the problem encodes the classic Max Cover problem and is therefore NP-hard. Moreover, an inapproximability result of Feige implies that any algorithm that can guarantee a solution that is at least times the optimum is likely to scale poorly with the number of seeds. Chen et al. showed that computing the exact influence of a single seed in the binary IC model, even when edge probabilities are , is #P-hard.
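The simulation-based estimate described above can be sketched in a few lines. The helper names (`sample_instance`, `estimate_influence`) and the edge-list input format are our own illustrative choices, not an API from the cited papers: each directed edge carries a probability, a live-edge instance is drawn by independent coin flips, and the influence of a seed set is the average number of nodes reachable from it over the sampled instances.

```python
import random
from collections import deque

def sample_instance(edges, rng):
    # Binary IC: flip an independent biased coin for each directed edge
    # (u, v, p); the edge is "live" with probability p, "null" otherwise.
    return [(u, v) for (u, v, p) in edges if rng.random() < p]

def infected(live_edges, seeds, n):
    # Nodes infected by the seed set = nodes reachable from a seed
    # over live edges (BFS over the instance).
    adj = [[] for _ in range(n)]
    for u, v in live_edges:
        adj[u].append(v)
    seen, queue = set(seeds), deque(seeds)
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def estimate_influence(edges, n, seeds, num_instances=200, seed=0):
    # Unbiased Monte Carlo estimate: average infected count over instances.
    rng = random.Random(seed)
    total = sum(len(infected(sample_instance(edges, rng), seeds, n))
                for _ in range(num_instances))
    return total / num_instances
```

With all edge probabilities equal to 1 the estimate is exact (every instance is the full graph); with probabilities equal to 0 it reduces to the seed set size.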
Using simulations, the objective studied by Kempe et al. is then to find a set of seeds with maximum average influence over a fixed set of propagation instances. A natural heuristic is to use the set of most influential individuals, say those with high degree or centrality, as seeds. This approach, however, cannot account for the dependence between seeds, missing the fact that two important nodes may "cover" essentially the same communities. Kempe et al. proposed a greedy algorithm (Greedy) instead. It starts with an empty seed set and iteratively adds to it the node with maximum _marginal_ gain in influence (relative to the current seed set). Since our objective is monotone and submodular, a classical result of Nemhauser et al. implies that the influence of the greedy solution with seeds is at least of the best possible for any seed set of the same size. By Feige's inapproximability result, this is the best approximation ratio guarantee we can (asymptotically and realistically) hope for. Greedy has become the gold standard for influence maximization in terms of the quality of the results. Greedy, however, does not scale to modern real-world social networks. The issue is that evaluating the marginal contribution of each node requires a directed reachability computation in each instance (of which there can be hundreds). Several performance improvements to Greedy have thus been proposed. Leskovec et al. proposed CELF, which performs "lazy" evaluations of the marginal contribution, only when a node is a candidate for the highest marginal contribution. Chen et al. took a different approach, using the reachability sketches of Cohen to speed up the reevaluation of the marginal contribution of all nodes. While effective, even with these and other accelerations, the best current implementations of Greedy do not scale to networks beyond edges, which are quite small by modern standards.
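Over a fixed set of live-edge instances, the greedy algorithm reduces to greedy max cover of (instance, node) pairs. The following sketch (the function name and layout are ours) precomputes, for every node, the set of pairs it covers, then repeatedly adds the node with the largest marginal gain; by the Nemhauser et al. result, the resulting coverage is within a factor 1 − 1/e of optimal. This is the exact (non-sketched) version, so it is only practical for small graphs.

```python
def greedy_seeds(instances, n, k):
    # Exact greedy over a fixed set of live-edge instances: repeatedly add
    # the node whose marginal gain in covered (instance, node) pairs is
    # largest; coverage / number of instances is the average influence.
    reach = {}  # reach[u] = set of (instance_id, node) pairs u covers
    for i, live_edges in enumerate(instances):
        adj = [[] for _ in range(n)]
        for u, v in live_edges:
            adj[u].append(v)
        for u in range(n):
            seen, stack = {u}, [u]
            while stack:  # DFS within instance i
                x = stack.pop()
                for y in adj[x]:
                    if y not in seen:
                        seen.add(y)
                        stack.append(y)
            reach[u] = reach.get(u, set()) | {(i, x) for x in seen}
    covered, seeds = set(), []
    for _ in range(k):
        best = max(range(n), key=lambda u: len(reach[u] - covered))
        seeds.append(best)
        covered |= reach[best]
    return seeds, len(covered) / len(instances)
```

In a single instance with edges 0→1, 0→2 and 3→4 on five nodes, greedy picks node 0 (gain 3) and then node 3 (gain 2), covering everything.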
To support massive graphs, several studies proposed algorithms specific to the IC model, which work directly with the edge probabilities instead of with simulations and thus cannot be reliably applied to a set of arbitrary instances. Borgs et al. recently proposed an algorithm based on reverse reachability searches from sampled nodes, similar in spirit to the approach used for reachability sketching. Their algorithm provides theoretical guarantees on the approximation quality and has good asymptotic performance, but large "constants." Very recently, Tang et al. developed TIM, which engineers the (mostly theoretical) algorithm of Borgs et al. to obtain a scalable implementation with guarantees. A significant drawback of this approach is that it only works for a pre-specified seed set size, whereas Greedy produces a sequence of nodes, with each prefix having an approximation guarantee with respect to the same-size optimum. In applications we are often interested not in a single point, but in a trade-off curve that allows us to find a sweet spot of influence per cost or to characterize the network. TIM also scales very poorly with the seed set size, and its evaluation only considered seed sets of up to 50 nodes. The DegreeDiscount heuristic refines the natural approach of adding the next highest-degree node. MIA converts the binary IC sampling probabilities to deterministic edge weights and works essentially with one deterministic instance. IRIE, by Jung et al., is a heuristic approximation of greedy addition of seed nodes, and has the best performance we are aware of for an algorithm that produces a sequence of seed nodes. In each step, the probability of each node to be covered by the current seed set is estimated using another algorithm (or simulations). They then use eigenvector computations to approximate the marginal contributions of all nodes.
Of those approaches, the IRIE heuristic scales much better and is much more accurate than the other heuristics. In particular, it performs nearly as well as Greedy on many research collaboration graphs. Contributions. We design a novel sketch-based approach for influence computation which offers scalability with performance guarantees. Our main contribution is SKIM (Sketch-based Influence Maximization), a highly scalable (approximate) implementation of the greedy algorithm for influence maximization. We also introduce _influence oracles_: after a preprocessing step that is almost linear, we can answer _influence queries_ very efficiently, considering only the sketches of the query seed set. We can apply our design to inputs specified as a fixed set of propagation instances, as in Kempe et al., with influence defined as the average over them. We also handle inputs specified as an IC model, where influence is defined as the expectation. Our model is defined precisely in Section [model:sec]. We now provide more details on our design. The exact computation of an influence query requires expensive graph searches from the query seed set on each of the instances. The exact greedy algorithm for influence maximization requires a similar computation for each marginal contribution. We address this scalability issue by working with sketches. The core of our approach is a set of per-node summary structures which we call _combined reachability sketches_. The sketch of a node compactly represents its influence "coverage" across instances; we call this its _combined reachability set_.
The combined reachability sketch of a node, precisely defined in Section [sketch:sec], is the bottom-k min-hash sketch of the combined reachability set of the node. This generalizes the reachability sketches of Cohen, which are defined for a single instance. The parameter k is a small constant that determines the tradeoff between computation and accuracy. Bottom-k sketches of sets support cardinality estimation, which means that we can estimate the influence (over all instances) of a node or of a set of nodes from their combined reachability sketches. The estimate has a small relative error and good concentration. Our use of combination sketches and state-of-the-art optimal estimators is key to obtaining the best balance between sketch size and accuracy. Our SKIM algorithm for influence maximization is presented in Section [binaryim:sec]. It scales by running the greedy algorithm in "sketch space," always taking a node with the maximum _estimated_ (rather than exact) marginal contribution. SKIM computes combined reachability sketches, but only until the node with the maximum estimated influence is computed. This node is then added to the seed set. We then update the sketches to be with respect to a _residual_ problem, in which the node that was selected into the seed set and its "influence" are no longer present. SKIM then resumes the sketch computation, starting with the residual sketches, but (again) stopping when a node with maximum estimated influence (in the current, residual, instance) is found. A new residual problem is then computed. This process is iterated until the seed set reaches the desired size. Since the residual problem becomes smaller with each iteration, we can compute a very large seed set very efficiently. We also prove that the total overhead of the updates required to maintain the residual sketches is small. In particular, for a set of _arbitrary_ instances, the algorithm can be run to exhaustion,
producing a full permutation of the nodes in time $o(\sum_{i \in [\ell ] } |g^{(i)}| + m \epsilon^{-2}\log^2 n)$ , with each estimate within a small relative error of the number of nodes reachable from at least one node in the seed set . the combined reachability sketch of a node captures its reachability information _ across _ instances . the sketches we use are the bottom- min - hash sketches of the combined reachability sets : we associate with each node - instance pair an independent random rank value drawn from $u[0,1]$ , the uniform distribution on $[0,1]$ . alternatively , we can work with integral permutation ranks with respect to a permutation on the node - instance pairs . we can also structure the permutation so that each sequence in positions to for integral has each node appear in exactly one pair . the associated instance with a node in chunk is randomly selected from instances for which the pair does not have a permutation rank of or less ( independently for each node ) . one can show that this can only improve estimation accuracy . only the first positions can be included in combined reachability sketches of nodes . when estimating influence , we can convert permutation ranks to random ranks using the exponential distribution . we can also estimate cardinality of a subset of the elements directly from permutation ranks .
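A small self-contained sketch of the bottom-k idea (names are ours, not the paper's): each element receives an independent uniform rank, the sketch keeps the k smallest ranks, and the set's cardinality is estimated as (k-1)/tau, where tau is the k-th smallest rank — the standard bottom-k estimator with small relative error and good concentration.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <random>
#include <vector>

// Keep the k smallest rank values (the bottom-k sketch of the set).
std::vector<double> bottomK(const std::vector<double>& ranks, size_t k) {
  std::vector<double> s(ranks);
  std::sort(s.begin(), s.end());
  if (s.size() > k) s.resize(k);
  return s;
}

// Standard bottom-k cardinality estimator: (k-1)/tau, where tau is the
// largest (k-th smallest) rank kept. If fewer than k elements exist,
// the sketch holds the whole set and the count is exact.
double estimateCardinality(const std::vector<double>& sketch, size_t k) {
  if (sketch.size() < k) return (double)sketch.size();
  return (double)(k - 1) / sketch.back();
}
```

For example, sketching 100,000 uniform ranks with k = 512 yields an estimate concentrated around 100,000, with coefficient of variation roughly 1/sqrt(k-2).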
to support correct and efficient updates of the sketches , we maintain an inverted index over rank values ; at the start of each iteration , the set of seed nodes has the nodes selected so far . it is not hard to show that the influence of a node in the residual problem of iteration is equal to its marginal influence with respect to in the original problem . therefore , , which is the influence of in the residual problem of iteration , is the marginal influence of , with respect to in the original problem . thus , by definition , for all . we also show that the partial sketches correctly capture a component of the sketches computed for the residual problem : at the end of an iteration selecting , each updated partial sketch is equal to the set of entries of the combined reachability sketch of in the residual problem that have rank value at most . the content of each sketch before computing the residual is clearly a superset of all reachable node - instance pairs with rank in the residual problem . we can then verify that entries are removed from only and for all covered node - instance pairs with . we now analyze the running time of skim . all updates of the residual problem together take time linear in the size of , since nodes and edges that are covered by the current seed set are removed once visited and never considered again . the remaining component of the computation is determined by the number of times ranks are inserted ( and removed ) from sketches . inserting a value into the sketch of a node requires a scan of all ( remaining ) incoming edges of that node in an instance . removals of ranks can be charged to insertions . so we need to bound the total number of rank insertions : the expected total number of rank insertions at a particular node is .
consider a sketch . we can show , viewing the sketches as uniform samples of reaching pairs , that each rank value removal corresponds to cardinality and hence influence ( marginal gain ) being reduced in expectation by a factor of . the initial influence is at most , so there are at most insertions until the marginal influence is reduced below , at which point we do not need to consider the node . the running time is dominated by the sum over nodes , of the number of times a rank is inserted to the sketch of , times the in - degree of ( the maximum over instances ) . from the lemma , we obtain a bound of on the total number of insertions . thus , we obtain a bound of on the running time of the algorithm . to obtain an approximation that is within with good probability , we can choose a fixed , for some constant . the relative error of each influence estimate of a node in an iteration is at most with probability of at least . since we use polynomially many estimates ( maximize influence among nodes in each of at most iterations ) , all estimates are within a relative error of with probability that is polynomially close to . lastly , we bound the approximation ratio of the `` approximate '' greedy algorithm we work with , which uses seeds with close to maximum instead of maximum marginal gain : [ approxgreedy : lemma ] with any submodular and monotone objective function , approximate greedy , which iteratively chooses a node with marginal gain that is at least of the maximum , has an approximation ratio of at least . the same claim holds in expectation when the selection is well concentrated , that is , its probability of being below times the maximum decreases exponentially with . the argument extends the analysis of exact greedy by nemhauser et al . .
for any , and after selecting any set of seeds , the maximum marginal gain by adding a single node is always at least of the maximum possible gain for nodes .when using the approximation , this is at least of the maximum possible gain .therefore , after approximate greedy selection of nodes , the influence is at least using the first order term of the taylor expansion .this worst - case analysis is too pessimistic , both for the approximation ratio and running time . in our experiments , we tested skim with a fixed , and observed that the computed seed sets had influence that is much closer to the exact greedy selection than indicated by the worst - case bounds .the explanation is that the influence distribution on real inputs is heavy - tailed , with the vast majority of nodes having a much smaller influence than the one of maximum influence .one factor of in the worst - case running time is due to a `` union bound '' ensuring a relative error of for all nodes in all iterations , with high probability . 
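The induction behind the proof of lemma [approxgreedy:lemma] can be written out as follows (our sketch of the standard Nemhauser-style argument): let OPT be the optimum influence with s seeds and G_j the influence of the first j approximate-greedy seeds, where each selection attains at least a (1-ε) fraction of the maximum marginal gain.

```latex
% Submodularity gives a node with marginal gain at least (OPT - G_j)/s,
% so the approximate selection reduces the gap geometrically:
\[
  \mathrm{OPT} - G_{j+1} \;\le\; \Bigl(1 - \tfrac{1-\epsilon}{s}\Bigr)\,(\mathrm{OPT} - G_j).
\]
% Iterating for j = 0, ..., s-1:
\[
  \mathrm{OPT} - G_s \;\le\; \Bigl(1 - \tfrac{1-\epsilon}{s}\Bigr)^{s}\,\mathrm{OPT}
  \;\le\; e^{-(1-\epsilon)}\,\mathrm{OPT},
\]
% and therefore, using the first-order Taylor expansion of e^{-(1-\epsilon)},
\[
  G_s \;\ge\; \bigl(1 - e^{-(1-\epsilon)}\bigr)\,\mathrm{OPT}
      \;\approx\; \Bigl(1 - \frac{1}{e} - \frac{\epsilon}{e}\Bigr)\,\mathrm{OPT}.
\]
```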
with a heavy tail distribution, we can identify the maximum with a small error if we ensure a small error only on the few nodes that have influence close to the maximum .furthermore , when the maximum influence is separated out from other influence values , our approximate maximum is more likely to be the node with actual maximum influence .moreover , the estimation error over iterations averages out , so as the seed set gets larger we can work with lower accuracy and still guarantee good approximation .we propose incorporating error estimation that is _ adaptive _ rather than worst - case .this facilitates tighter confidence bounds on the estimation quality of our output .it also allows us to adjust the sketch parameter during computation in order to meet pre - specified accuracy and confidence levels .let the _ discrepancy _ in an iteration be the gap between the actual maximum and the marginal influence of the selected seed .we will bound the sum of discrepancies across iterations by maintaining a confidence distribution on this sum .the estimation uses two components .( i ) the exact marginal influence of the selected node in each iteration , as well as the sum , which is the influence of our seed set .the value is computed when generating the residual problem .( ii ) noting in each iteration the size of the second largest sketch ( excluding the last processed rank ) . 
intuitively , if the second largest sketch is much smaller than the first one , it is more likely that the first one is the actual maximum . we bound the discrepancy in a single iteration using chernoff bounds . the probability that a sum of independent bernoulli trials with expectation $\mu$ falls below $(1-\nu)\mu$ satisfies $$\pr\left [ x < ( 1-\nu)\mu \right ] < \left ( \frac{\exp(-\nu)}{(1-\nu)^{(1-\nu ) } } \right)^\mu .$$ we use this to bound the probability that the discrepancy exceeds , where is the exact marginal gain of our selected seed node . we consider the second largest sketch size , ( the last rank of is not considered part of the sketch even if included ) . we use , , and in equation to obtain a confidence level . finally , to maintain an upper bound on the confidence - error distribution of the sum of discrepancies , we take a convolution , after each iteration , of the current distribution with the distribution of the current iteration . skim can be adapted for higher concurrency by running the sketch - building phases in batches of ranks . we can also adapt it to process inputs presented as an ic model instead of as a set of instances . this yields a more efficient implementation than when generating a set of instances using simulations and running skim on them .
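Returning to the discrepancy bound above, its numeric form is easy to evaluate; this small helper (ours, illustrative) computes the lower-tail Chernoff bound for given μ and ν, which is what the adaptive confidence estimation plugs in each iteration.

```cpp
#include <cassert>
#include <cmath>

// Lower-tail Chernoff bound for a sum X of independent Bernoulli trials
// with expectation mu and 0 < nu < 1:
//   Pr[X < (1-nu)*mu] < ( exp(-nu) / (1-nu)^(1-nu) )^mu
double chernoffLowerTail(double mu, double nu) {
  return std::pow(std::exp(-nu) / std::pow(1.0 - nu, 1.0 - nu), mu);
}
```

The corresponding confidence level is one minus the returned bound; note the bound decays exponentially in μ, so larger sketches give sharply better confidence.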
in ic - model skim , the residual problem is a collection of partial models and sketch building is performed on the probabilistic model . we omit details due to space limitations . we now present an accurate and efficient oracle for binary influence , which is based on precomputing a combined reachability sketch ( as defined in section [ sketch : sec ] ) for each node . we preprocess a set of instances using computation and working storage of per node . the preprocessing generates combined reachability sketches of size for each node . [ binaryinforacles : thm ] given a set of combined reachability sketches for with parameter , influence queries for a set of nodes can be estimated in time from the sketches . the estimate is nonnegative and unbiased , has cv at most , and is well concentrated , meaning that the probability that the relative error exceeds decreases exponentially with . we next present the two components of our oracle : estimating the influence of from the sketches of the nodes in and efficiently computing all combined reachability sketches . we show how to use the combined reachability sketches of a set of nodes to estimate the influence of , as given in equation . in graph terms , this means estimating the cardinality of the union from the sketches , with . the influence is the union cardinality divided by the number of instances and , accordingly , is estimated using . our estimators use the threshold rank of each node ; see equation . from the bottom- sketches of each set , we can unbiasedly estimate the cardinality of the union . one way to do this is to compute the bottom- sketch of the union , which has threshold value , and apply the cardinality estimator . this would already conclude the proof of theorem [ binaryinforacles : thm ] .
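The simplest union estimator just described can be sketched as follows (illustrative code, ours): merge the per-seed bottom-k sketches, keep the k smallest distinct ranks (shared elements carry identical ranks across sketches, so duplicates are dropped), and apply the (k-1)/tau estimator to the result.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Estimate |union of sets| from the bottom-k sketches of each set.
double estimateUnionCardinality(
    const std::vector<std::vector<double>>& sketches, size_t k) {
  std::vector<double> all;
  for (const auto& s : sketches) all.insert(all.end(), s.begin(), s.end());
  std::sort(all.begin(), all.end());
  all.erase(std::unique(all.begin(), all.end()), all.end());  // shared ranks
  if (all.size() < k) return (double)all.size();  // union seen in full
  return (double)(k - 1) / all[k - 1];            // (k-1)/tau on the union
}
```

With identical sets the merged sketch collapses to a single set's sketch, while disjoint sets contribute distinct ranks, which is why the union estimate remains unbiased in both cases.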
in our implementation , we use a strictly better union cardinality estimator that uses all the ( at most ) values in the set of sketches instead of just the smallest : this estimator , proposed by cohen and kaplan , can be computed from the sketches in time , by first sorting the sketches by decreasing threshold , and then identifying for each distinct rank value the threshold of the first sketch that contains it . when the sets are all the same , the estimate is the same as applying an estimator to the bottom- sketch on the union , but equation can have up to a factor of lower cv when the sets are sufficiently disjoint . moreover , this estimator is an optimal sum estimator in that it minimizes variance given the information available in the sketches . we can also derive a permutation version of equation . the simplest way is to treat the permutation rank as a uniform rank which is the probability that the rank of another node is smaller than . when there is a single instance , the combined sketches are simply reachability sketches . reachability sketches for all nodes can be computed very efficiently , using at most edge traversals in total , where is the number of edges . algorithm [ reachsketch1:alg ] first shuffles the node - instance pairs , then computes combined sketches by applying the pruned searches algorithm of cohen on each instance , obtaining a sketch for each node , and combining the results . the combined sketch is obtained by taking the bottom- values in the union of the sketches . the algorithm runs in time .
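The combination step can be kept memory-light, as the next paragraph describes: rather than holding all per-instance sketches, maintain a running bottom-k set and fold each newly computed instance sketch into it. A minimal sketch of that step (our code; ranks are independent per node-instance pair, so no deduplication is needed):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Fold the sketch computed for the next instance into the running
// bottom-k set, keeping only the k smallest ranks. Only O(k) values
// per node are resident at any time.
void foldIntoBottomK(std::vector<double>& current,
                     const std::vector<double>& next, size_t k) {
  current.insert(current.end(), next.begin(), next.end());
  std::sort(current.begin(), current.end());
  if (current.size() > k) current.resize(k);
}
```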
rather than storing all sets of sketches , we can compute and merge concurrently or sequentially , but after each step , take the bottom- values in the current bottom- set and the newly computed sketch for instance . therefore , the additional run - time storage requirement for sketches is . this gives us the worst - case bounds on the computation stated in theorem [ binaryinforacles : thm ] . we implemented our algorithms in c++ using visual studio 2013 with full optimization . all experiments were run on a machine with two intel xeon e5 - 2690 cpus and 384gib of ddr3 - 1066 ram , running windows 2008r2 server . each cpu has 8 cores ( 2.90ghz , 8 l1 caches , 8 l2 caches , and 20mib of l3 cache ) , but all runs are sequential for consistency . we ran our experiments on benchmark networks available as part of the snap and webgraph projects . more specifically , we test _ social _ ( , , , , , , , ) , _ collaboration _ ( ) , and web ( , ) networks . is obtained from by reversing all arcs ( influence follows the reverse direction of links ) . kempe et al . proposed two natural ways of associating probabilities with edges in the binary ic model : the _ uniform _ scheme assigns a constant probability to each directed edge ( they used and ) , whereas in the _ weighted cascade _ ( wc ) scheme the probability is the inverse of the degree of the head node ( making the probability that a node is influenced less dependent on its number of neighbors ) . we consider the wc scheme by default , but we will also experiment with the uniform scheme ( un ) . these two schemes are the most commonly tested in previous studies of scalability . figure [ fig : oracleerror ] shows in detail how the error of the estimator ( y axis ) decreases when the seed set size increases ( x axis ) .
to better evaluate the performance of estimating the _ union _ of several reachability sets , we use the following _ neighborhood generator _ for queries : for each query , it first picks a node at random with probability proportional to its degree . from that node it exhaustively grows a bfs of the smallest depth such that the tree contains at least nodes . the nodes for the seed set are then uniformly sampled from this tree . with this generator , we expect the reachability sets of seed nodes to highly overlap . looking at the figure , we observe that the estimation error of our oracle decreases rapidly for increasing . also , running queries from the neighborhood generator ( right ) compared to the uniform one ( left ) has almost no effect on the estimation error ; for 50 seed nodes it is even better on many instances . finally , figure [ fig : ell ] ( right ) reports the performance of the oracle for fixed instances on the general ic model . we vary the number of instances generated by simulations when building the oracle , but compute the error on a different set of 8192 instances .
since our oracle implementation is optimized for fixed instances , we see a higher error with . we can also see that the error decreases with the number of simulations . we conclude that for an ic model oracle , it is beneficial to construct sketches that have approximation guarantees with respect to the ic model itself ( cf . section [ icsketches : sec ] ) rather than work with simulations . we presented highly scalable algorithms for binary influence computation . skim is a sketch - space implementation of the greedy influence maximization algorithm that scales it by several orders of magnitude , to graphs with billions of edges . skim computes a sequence of nodes such that each prefix has a probabilistic guarantee on approximation quality that is close to that of greedy . we also presented sketch - based influence oracles , which after a near - linear processing of the instances can estimate influence queries in time proportional to the number of seeds . our experimental study focused on instances generated by an ic model , since the fastest algorithms we compared with only apply in this model . our experiments revealed that skim is accurate and faster than other algorithms by one to two orders of magnitude . in future work , we plan to develop a skim - like algorithm for _ timed influence _ , where edges have lengths that are interpreted as transition times and we consider both the speed and scope of infection . we also plan to use sketches to efficiently estimate the jaccard similarity of the influence sets of two nodes , which we believe to be an effective similarity measure .

propagation of contagion through networks is a fundamental process . it is used to model the spread of information , influence , or a viral infection . diffusion patterns can be specified by a probabilistic model , such as independent cascade ( ic ) , or captured by a set of representative traces .
basic computational problems in the study of diffusion are _ influence queries _ ( determining the potency of a specified _ seed set _ of nodes ) and _ influence maximization _ ( identifying the most influential seed set of a given size ) . answering each influence query involves many edge traversals , and does not scale when there are many queries on very large graphs . the gold standard for influence maximization is the greedy algorithm , which iteratively adds to the seed set a node maximizing the marginal gain in influence . greedy has a guaranteed approximation ratio of at least and actually produces a sequence of nodes , with each prefix having approximation guarantee with respect to the same - size optimum . since greedy does not scale well beyond a few million edges , for larger inputs one must currently use either heuristics or alternative algorithms designed for a pre - specified small seed set size . we develop a novel sketch - based design for influence computation . our greedy sketch - based influence maximization ( skim ) algorithm scales to graphs with billions of edges , with one to two orders of magnitude speedup over the best greedy methods . it still has a guaranteed approximation ratio , and in practice its quality nearly matches that of exact greedy . we also present _ influence oracles _ , which use linear - time preprocessing to generate a small sketch for each node , allowing the influence of any seed set to be quickly answered from the sketches of its nodes .
storing and querying two - dimensional point sets is fundamental in computational geometry , geographic information systems , graphics , and many other fields . most researchers have aimed at designing data structures whose size , measured in machine words , is linear in the number of points . that is , data structures are considered small if they store a set of points on a grid in words of bits each . using bits is within a constant factor of optimality when the points are distributed sparsely and randomly over the grid , but we can often do better on real - world point sets because they tend to be clustered and , therefore , compressible . quadtrees tend to have nodes when the points are clustered , but pointer - based quadtree data structures can still take bits . one way to avoid storing pointers is to store the points' coordinates instead , but that also takes bits . hudson gave a structure that uses bits when the points are spaced appropriately and we are willing to tolerate some distortion of the points' positions . recently , de bernardo et al . and venkat and mount independently proposed similar structures based on static and dynamic succinct tree representations , respectively ( see , e.g. , ) . both structures use bits per node in the quadtree and have the same asymptotic query times as traditional structures , which support only edge - by - edge navigation . venkat and mount noted , however , that `` a method for compressing paths or moving over multiple edges at once using a succinct structure may speed up the many algorithms that rely on traversal of the quadtree .
'' in section [ sec : space ] we review the ideas behind quadtrees and prove a simple upper bound on the number of nodes in terms of the points' clustering . in section [ sec : structure ] we describe a quadtree data structure that uses bits per node in the quadtree and allows us to move over multiple edges at once . in section [ sec : membership ] we show how this lets us perform faster membership queries . finally , in section [ sec : experiments ] we present experimental evidence that our structure is practical . we leave as future work making our structure dynamic . let be a set of points on a grid . if is 0 or points then the quadtree for the grid is only a root storing either 0 or 1 , respectively ; otherwise , the root stores 1 and has four children , which are the quadtrees of the grid's four quadrants . figure [ fig : tree ] shows an example , taken from . notice the order of the quadrants is top - left , top - right , bottom - left , bottom - right , instead of the counterclockwise order customary in mathematics . this is called the morton or z - ordering and it is useful because , assuming is a power of 2 and the origin is at the top right without loss of generality ( since we can manipulate the coordinate system to make it so ) , the obvious binary encoding of a root - to - leaf path is the interleaving of the binary representations of the corresponding point's - and -coordinates .
grid ( left ) ; the quadtree for those points ( right ) . the heavy lines in the quadtree indicate the path to the leaf corresponding to the shaded point on the grid . for example , if we imagine the edges descending from each internal node in figure [ fig : tree ] are labelled from left to right , then the thick edges are labelled ; the obvious binary encoding for this path is . the coordinates for the shaded point , which corresponds to the leaf at the end of this path , are , so interleaving the binary representations 1001 and 0110 of its - and -coordinates also gives . we can interleave a point's coordinates in time on a ram ; this operation is also fast in practice on real machines , e.g. , using pre - computed tables . the quadtree has height at most and nodes , and a subtree rooted at depth encodes the points on a square . given a query rectangle , we can find all the points in by starting at the root and visiting all the nodes whose subtrees' squares overlap , recording the leaves storing 1s . this is called range reporting or , in the special case , a membership test for . if we report points then we visit nodes . if the points in are clustered , however , then intuitively the root - to - leaf paths in the quadtree will share many nodes and we will use less space and time . [ thm : space ] suppose we can partition into clusters , not necessarily disjoint , with points and diameters . then the quadtree has nodes . let be an square on the grid , let , and let be the set of ancestors in the quadtree of the points in . ( for simplicity , we identify points in with their corresponding leaves in the quadtree . ) let be the ancestors of only the corners of ( which may or may not be in ) . notice for any ancestor of a point in that has depth at most , its subtree contains all the points in a square of size at least . therefore , the square must contain at least one corner of , so . it follows that so .
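The interleaving above is easy to implement; this is our minimal sketch (not the paper's code), taking one bit of each coordinate per level, first coordinate's bit first, matching the example where interleaving 1001 and 0110 yields 10010110.

```cpp
#include <cassert>
#include <cstdint>

// Morton (z-order) path label: interleave the bits of a point's two
// coordinates, most significant bit first, with the first coordinate's
// bit preceding the second coordinate's bit at each quadtree level.
uint64_t mortonLabel(uint32_t a, uint32_t b, int bits) {
  uint64_t label = 0;
  for (int i = bits - 1; i >= 0; --i) {
    label = (label << 1) | ((a >> i) & 1);  // first coordinate's bit
    label = (label << 1) | ((b >> i) & 1);  // second coordinate's bit
  }
  return label;
}
```

A loop like this runs in time proportional to the number of levels; as the text notes, table lookups (or bit-manipulation instructions such as PDEP on x86, an assumption about the target machine) make it even faster in practice.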
the proof above is something like a two - dimensional analogue of gupta , hon , shah and vitter's analysis of tries . in the full version of this paper we will consider higher dimensions and give bounds with respect to hierarchical clustering . the structure by de bernardo et al . mentioned in section [ sec : introduction ] is a variation of brisaboa , ladra and navarro's -tree structure . brisaboa , ladra and navarro designed -trees to compress the web graph , and de bernardo et al . adapted it to other domains , such as geographic data . the main difference is that if contains all the points encoded by the subtree of a node at depth , then in de bernardo et al.'s structure that subtree is only the node itself , conforming to the definition of a quadtree ; in brisaboa , ladra and navarro's structure , that subtree has height and leaves . thus , the original -tree can have more nodes than the quadtree . for many applications it is rare that point sets contain large squares that are completely filled . therefore , in this version of this paper we make the simplifying assumption that each internal node of the quadtree has at least one descendant storing a 0 , so both versions of the -tree have the same number of nodes as the quadtree . we will remove this assumption in the full version .
to store a quadtree , we first replace each internal node by a binary tree of height 2 and remove any node that has no descendant storing a 1 ; this increases the size of the whole tree by a factor of at most . let be the resulting binary tree . in addition to simplifying our construction , this modification makes quadtrees more practical in higher dimensions ( see ) , which we will also consider in the full version of this paper . we then perform a heavy - path decomposition of . that is , we partition into root - to - leaf paths , called heavy paths , such that the path containing a node also contains the child of with the most leaf descendants ( breaking ties arbitrarily ) . one well - known property of this decomposition is that each root - to - leaf path in consists of initial segments of heavy paths . figure [ fig : decomposition ] shows the heavy - path decomposition of the binary tree for our example from figure [ fig : tree ] . nodes storing 1s are black ; nodes storing 0s are shown hollow , and discarded ; thick edges belong to heavy paths . the numbers below the black leaves indicate our ordering of the paths . we encode each heavy path as a binary string whose 0s and 1s indicate which of s nodes are left children and which are right children ( considering the root as a left child , say , for simplicity ) in increasing order by their depths . we sort the encodings into decreasing order by length , breaking ties such that if two paths and have the same length and their topmost nodes are and , then the encodings of and appear in the same order as the encodings of the paths containing the parents of and . ( notice and can not have the same parent , since they have the same height and the tree is binary .
) the numbers below the leaves in figure [ fig : decomposition ] indicate how we order the paths in our example . we store the concatenation of the paths' encodings , which consists of bits . we say the bit indicates whether the corresponding node is a left child or a right child . for each depth ( considering the root to have depth 0 and leaves to have depth ) , we store a bitvector with 1s indicating which nodes at that depth in have two children ; see , e.g. , for a discussion of bitvectors . these bitvectors have as many bits as there are internal nodes in . for our example , dashes and spaces are only for legibility . storing the concatenation and all the bitvectors takes bits per node in and , therefore , also bits per node in the original quadtree . for each length , we also store the starting position in of the first encoding with that length ; this also takes a total of bits . in the full version of this paper we will give more details and discuss ways in which we can slightly reduce our space usage . suppose the relevant bit of the bitvector is 1 , and that bit tells us which child is which . reversing the example in the previous paragraph , if is the node at depth 4 in in the fourth path in our ordering , then it is the third node at depth 4 which indicates has two children . therefore , one of 's children is the node at depth 5 in the same path , while 's other child is the top node in the third path starting at depth 5 , which is the ninth path in our ordering . suppose we want to perform a membership query for .
we compute the label on the path to in time , as described in section [ sec : introduction ] . we set to be the root of , then repeat the following steps until we reach or can descend no further : we find the longest common prefix of the remainder of the path label for and the encoding of the heavy path starting at ( except that we ignore the first bit of , which is 0 and corresponds to the root ) , which takes time because the path label and the encoding are bits ; we descend the initial segment of the heavy path encoded by that common prefix ; if we reach , then we report ; if the node we are currently visiting has only one child , then we report ; otherwise , we set to be the child of the node we are currently visiting that is not in the same heavy path , and continue . in total , the query takes time proportional to the number of initial segments we traverse . since we only descend , we never need select queries . to perform a membership query for in our example , we compute the path label 10010110 and set to be the root ; we find that this label does not share any non - empty prefix with the encoding of the heavy path starting at ( ignoring the leading 0 because is the root , although in this case it makes no difference ) ; and we set to be the root's right child . we then find that the remainder of the path label ( which is all of it ) shares a prefix of length 6 , 100101 , with the encoding of the heavy path starting at ; we descend to the 6th node on that path ; we set to be that node's right child . finally , we find the remainder of the path label , 10 , shares a prefix of length 2 , all of 10 , with the encoding of the heavy path starting at ; we descend to the 2nd node on that path ; and we report . since each root - to - leaf path in consists of initial segments of heavy paths , our data structure obviously supports membership queries in time . if the query point is isolated , we use even less time .
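The longest-common-prefix step in the descent above can be done a machine word at a time; here is a minimal sketch (ours) for labels that fit in one 64-bit word, using XOR and a count-leading-zeros instruction (we assume a GCC/Clang-style `__builtin_clzll`, consistent with the g++ toolchain used in the experiments).

```cpp
#include <cassert>
#include <cstdint>

// Length of the longest common prefix of two bit strings stored in the
// low `bits` bits of a and b (most significant bit first, 1 <= bits <= 64).
int longestCommonPrefix(uint64_t a, uint64_t b, int bits) {
  uint64_t diff = (a ^ b) << (64 - bits);  // align the strings' first bits
  if (diff == 0) return bits;              // identical in all `bits` bits
  return __builtin_clzll(diff);            // position of first differing bit
}
```

For longer labels and encodings, the same comparison is applied word by word, which is why each step of the descent costs time proportional to the label length divided by the word size.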
quadtree such that a membership query for takes time , where is the number of points in within distance of . any node at depth at least in whose subtree's square contains , has at most leaf descendants . it follows that the path from to the deepest node of whose subtree's square contains , consists of initial segments of heavy paths . to see why , consider that if we ascend from to , every time we move from the top - most node in one heavy path to its parent in another heavy path , the number of leaf descendants in the subtree below us at least doubles . since the path from the root to has length , the path from the root to consists of initial segments of heavy paths . we note that combining theorems [ thm : space ] and [ thm : membership ] suggests our structure should be particularly suited to applications in which , e.g. , points are highly clustered but queries are chosen according to a very different distribution . [ cor : combination ] suppose we can partition a set of points into clusters , not necessarily disjoint , with points and diameters . then we can store in bits such that a membership query for takes time , where is the number of points in within distance of . we have implemented the data structure from section [ sec : structure ] and compared it experimentally with the -tree . due to space constraints , we present the results only for membership queries ; in the full paper we will present results for range reporting . all the experiments presented here were performed on an intel core i7 - 3820 ( 3.60ghz ) , 32 gb ram , running ubuntu server ( kernel 3.13.0 - 35 ) . we compiled with gnu / g++ version 4.6.3 using the -o3 directive .
in our implementation we make use of the bitvector implementations available in libcds ( https://github.com/fclaude/libcds ) . specifically , we use three types of bitvectors . our first variant , a heavy - path implementation with plain bitvectors , we call hp ; for our second variant , named hp , we use compressed bitmaps . libcds provides several implementations of compressed bitmaps , and we use either the raman , raman and rao ( rrr ) implementation or sadakane's sdarray ( for each dataset , we select the one achieving better compression ) . we compare these two variants with two configurations of -trees . the first variant , named -tree , consists of a basic version of the -tree where the degree is fixed for all the levels of the tree . the second variant , named -tree , is the configuration considered as optimal for the -tree , which includes a hybrid approach with different values for the levels of the tree and a vocabulary of leaf submatrices to obtain better compression . we did not compare to venkat and mount's structure because they have not yet made an implementation available . we also did not compare to the classical quadtree representations since the -tree is an order of magnitude smaller and has better access time . for our experimental evaluation we use grid datasets from different domains : geographic information systems ( gis ) , social networks ( sn ) , web graphs ( web ) and rdf datasets ( rdf ) . for gis data we use the geonames dataset , which contains more than 9 million populated places , and convert it into three grids with different resolutions : geo - sparse , geo - med , and geo - dense . ( the higher the resolution , the sparser the matrix .
) for sn and web we consider the grids associated with the adjacency matrices of two web graphs ( indochina-2004 , uk-2002 ) and two social networks ( dblp-2011 , enwiki-2013 ) obtained from the laboratory for web algorithmics . finally , we use rdf data obtained from the dbpedia dataset . this rdf dataset contains triples ( s , p , o ) indicating subjects that are related to objects with a specific predicate . thus , each predicate defines a binary relation among subjects and objects that can be represented as points in a grid . we create three different grids for our experiments , selecting predicates with different numbers of related objects ( triples - sparse , triples - med , and triples - dense ) . table [ tab : space ] gives the main characteristics of the datasets used : the name of the dataset , the size of the grid ( ) , the number of points it contains ( ) and the space achieved by the four representations compared : -tree , -tree , hp , and hp . the space is measured in bits per point ( bpp ) , dividing the total space of the structure by the number of points ( ) in the grid . we can observe that hp obtains the worst compression among all the alternatives , but hp obtains better results than -tree for some of the datasets , which is remarkable as this configuration of the -tree exploits several compression techniques ; these techniques may be considered for future extensions of this proposal and may allow the hp variant to reduce its space . hp clearly outperforms -tree for very sparse grids . [ table [ tab : space ] : description of the datasets and space comparison . ] we now analyze the time performance of our proposed structure . we distinguish three different types of membership queries : empty cells , filled cells and isolated filled cells ( the top 100,000 most isolated filled cells ) , and measure average times per query in nanoseconds . we show in figure [ fig : times ] the results obtained by the four representations over two kinds of grid datasets ,
gis and sn . ( due to space constraints we omit results for the web and rdf datasets . ) we can observe that for both scenarios we obtain similar performance . the -tree representation obtains better results when querying empty cells , as the computation for reaching a zero node in the -tree is lighter than using heavy paths . however , hp becomes the best alternative when querying filled cells : our non - compressed data structure is always the fastest one for cells with values , and much faster for isolated points . in this latter case , even the compressed variant of our structure outperforms the most optimized -tree both in time and space . [ figure [ fig : times ] : hp and -tree variants , for the gis ( left ) and sn ( right ) datasets and each type of membership query : empty cells ( top ) , filled cells ( center ) and isolated filled cells ( bottom ) . curves for the gis datasets show results for geo - sparse , geo - med , and geo - dense ; curves for the sn datasets show results for dblp-2011 and enwiki-2013 . for example , consider the graph for the gis dataset and queries on isolated filled cells ( bottom left ) : the middle point on the curve for hp is to the left of and below the bend in the curve for -tree ; this means that , for the geo - med dataset , the hp structure uses less space than the -tree structure and also answers faster . ( notice the query type does not affect space usage . ) ] we have presented a fast space - efficient representation of quadtrees , answering in the affirmative the conjecture of venkat and mount . our structure has nice theoretical bounds and is practical . its space requirements are similar to those of other space - efficient representations of quadtrees , e.g. , the -trees , but our structure is faster at handling isolated filled cells .
in the full version of this paper we will generalize our structure to higher dimensions , give bounds in terms of hierarchical clustering and present experimental results for range reporting . many thanks to timothy chan and yakov nekrich for directing us toward venkat and mount's paper . the first author is also grateful to the late ken sevcik for introducing him to some concepts used in this paper .

d. arroyuelo , r. cánovas , g. navarro , and k. sadakane . succinct trees in practice . in proc . alenex , pages 84-97 , 2010 .
n. bereczky , a. duch , k. németh , and s. roura . quad - k - d trees . in proc . latin , pages 743-754 , 2014 .
p. boldi , m. rosa , m. santini , and s. vigna . layered label propagation : a multiresolution coordinate - free ordering for compressing social networks . in proc . www , pages 587-596 , 2011 .
p. boldi and s. vigna . the webgraph framework i : compression techniques . in proc . www , pages 595-601 , 2004 .
n. r. brisaboa , s. ladra , and g. navarro . compact representation of web graphs with extended functionality . 39:152-174 , 2014 .
p. davoodi and s. s. rao . succinct dynamic cardinal trees with constant time operations for small alphabet . in proc . tamc , pages 195-205 , 2011 .
g. de bernardo , s. álvarez - garcía , n. r. brisaboa , g. navarro , and o. pedreira . compact querieable representations of raster data . in proc . spire , pages 96-108 , 2013 .
f. e. fich . constant time operations for words of length w . 1999 .
i. gargantini . an effective way to represent quadtrees . 25:905-910 , 1982 .
a. gupta , w .- k . hon , r. shah , and j. s. vitter . compressed data structures : dictionaries and data - aware measures . 387:313-331 , 2007 .
b. hudson . succinct representation of well - spaced point clouds . technical report abs/0909.3137 , corr , 2009 .
g. m. morton . a computer oriented geodetic data base ; and a new technique in file sequencing . technical report , ibm ltd .
g. navarro and k. sadakane . fully functional static and dynamic succinct trees . 10:16 , 2014 .
m. pătraşcu . succincter . in proc . focs , pages 305-313 , 2008 .
d. d. sleator and r. e. tarjan . a data structure for dynamic trees . 26:362-391 , 1983 .
p. venkat and d. m. mount . a succinct , dynamic data structure for proximity queries on point sets . in proc . cccg , 2014 . to appear .

abstract : real - world point sets tend to be clustered , so using a machine word for each point is wasteful . in this paper we first bound the number of nodes in the quadtree for a point set in terms of the points' clustering . we then describe a quadtree data structure that uses bits per node and supports faster queries than previous structures with this property . finally , we present experimental evidence that our structure is practical .
magnetohydrodynamic ( mhd ) equations can be used in modeling phenomena in a wide range of applications including laboratory , astrophysical , and space plasmas . for example , 3d mhd simulations have been widely adopted in space weather simulations . the historical review and current status of the existing popular 3d mhd models can be found in and , respectively . however , the mhd equations form a nonlinear system of hyperbolic conservation laws that is so complex that high - resolution methods are necessary to solve them in order to capture shock waves and other discontinuities . these high - resolution methods are in general computationally expensive , and parallel computational resources such as beowulf clusters or even supercomputers are often utilized to run the codes that implement these methods . in the last few years , the rapid development of graphics processing units ( gpus ) has made them more powerful in performance and more programmable in functionality . a comparison of the computational power of gpus and cpus shows that gpus exceed cpus by orders of magnitude . the theoretical peak performance of the current consumer graphics card nvidia geforce gtx 295 ( with two gpus ) is 1788.48 giga floating - point operations per second ( gflops ) per gpu in single precision , while a cpu ( core 2 quad q9650 , 3.0 ghz ) gives a peak performance of around 96 gflops in single precision . the release of the _ compute unified device architecture ( cuda ) _ hardware and software architecture is the culmination of such development . with cuda , one can directly exploit a gpu as a data - parallel computing device by programming in the standard c language , and avoid working with a high - level shading language such as cg , which requires a significant amount of graphics - specific knowledge and was previously used for performing computation on gpus . detailed performance studies on gpus with cuda can be found in and . cuda is a general purpose parallel computing architecture developed by nvidia . it
includes the cuda instruction set architecture ( isa ) and the parallel compute engine . an extension to the c programming language and its compiler are provided , so that the parallelism and high computational power of gpus can be used not only for rendering and shading , but also for solving many computationally intensive problems in a fraction of the time required on a cpu . cuda also provides basic linear algebra subroutines ( cublas ) and fast fourier transform ( cufft ) libraries to leverage the capabilities of gpus . these libraries relieve developers from rebuilding frequently used basic operations such as matrix multiplication . graphics cards from the g8x series onward support the cuda programming model ; and the latest generation of nvidia gpus ( gt2x0 series or later ) unifies vertex and fragment processors and provides shared memory for interprocessor communication . an increasing number of new gpu implementations with cuda for different astrophysical simulations have been proposed . et al . _ re - implemented direct gravitational n - body simulations on gpus using cuda . for , they reported a speedup of about 100 compared to the host cpu and about the same speed as the grape-6af . a library ` sapporo ` for performing high precision gravitational n - body simulations was developed on gpus by gaburov _ et al . _ this library achieved twice the speed of the commonly used grape6a / grape6-blx cards . et al . _ implemented a particle - in - cell ( pic ) code on gpus for plasma simulations and visualizations and demonstrated a speedup of 11 - 22 for different grid sizes . sainio presented an accelerated gpu cosmological lattice program for solving the evolution of interacting scalar fields in an expanding universe , achieving speedups between one and two orders of magnitude in single precision . in the above works , no discussion of using double precision on gpus was reported .
in mhd simulations , the support of double precision is important , especially for nonlinear problems . we will evaluate the performance and accuracy of double precision on gpus in this work . in this paper , we present an efficient implementation to accelerate the computation of mhd simulations on gpus , called _ gpu - mhd _ . to our knowledge , this is the first work describing mhd simulations on gpus in detail . the goal of our work is to perform a pilot study on numerically solving the ideal mhd equations on gpus . in addition , as the trend of today's chip design is moving to streaming and massively parallel processor models , developing new mhd codes to exploit such architectures is essential . _ gpu - mhd _ can be easily ported to other many - core platforms such as intel's upcoming larrabee , making it more flexible for the user's choice of hardware . this paper is organized as follows : a brief description of the cuda programming model is given in section 2 . the numerical scheme which _ gpu - mhd _ adopted is presented in section 3 .
in section 4 , we present the gpu implementation in detail . numerical tests are given in section 5 . an accuracy evaluation comparing single and double precision computation results is given in section 6 . performance measurements are reported in section 7 , and visualization of the simulation results is described in section 8 . we conclude our work and point out future work in section 9 . the compute unified device architecture ( cuda ) was introduced by nvidia as a general purpose parallel computing architecture , which includes the gpu hardware architecture as well as software components ( the cuda compiler and the system drivers and libraries ) . the cuda programming model consists of functions , called _ kernels _ , which can be executed simultaneously by a large number of lightweight _ threads _ on the gpu . these threads are grouped into one- , two- , or three - dimensional _ thread blocks _ , which are further organized into one- or two - dimensional _ grids _ . only threads in the same block can share data and synchronize with each other during execution . thread blocks are independent of each other and can be executed in any order . a graphics card that supports cuda , for example one based on the gt200 gpu , consists of 30 streaming multiprocessors ( sms ) . each multiprocessor consists of 8 streaming processors ( sps ) , providing a total of 240 sps . threads are grouped into batches of 32 called _ warps _ , which are executed independently in single instruction multiple data ( simd ) fashion . threads within a warp execute a common instruction at a time . for memory access and usage , there are four types of memory , namely , _ global memory _ , _ constant memory _ , _ texture memory _ as well as _ shared memory _ .
global memory has a separate address space for obtaining data from the host cpu's main memory through the pcie bus , which provides about 8 gb / sec on the gt200 gpu . any value stored in global memory can be accessed by all sms via load and store instructions . constant memory and texture memory are cached , read - only and shared between sps . constants that are kept unchanged during kernel execution may be stored in constant memory . built - in linear interpolation is available in texture memory . shared memory is limited ( 16 kb for the gt200 gpu ) and shared between all sps in an sm . for detailed information concerning memory optimizations , we refer the reader to the cuda best practices guide . double precision is an important concern in many computational physics applications ; however , support of double precision is limited to the nvidia cards having compute capability 1.3 ( see appendix a in ) such as the gtx 260 , gtx 280 , quadro fx 5800 ( contains one gt200 gpu ) , and tesla c1060 ( contains one gt200 gpu ) and s1070 ( contains four gt200 gpus ) . in the gt200 gpu , there are eight single precision floating point ( fp32 ) arithmetic logic units ( alus ) in each sm ( one per sp ) , but only one double precision floating point ( fp64 ) alu ( shared by eight sps ) . the theoretical peak performance of the gt200 gpu is 936 gflops in single precision and 78 gflops in double precision . in cuda , double precision is disabled by default , so all double numbers are silently converted into float numbers inside kernels and any double precision calculations computed are incorrect . in order to use double precision floating point numbers , we need to invoke ` nvcc ` with the flag ` -arch=sm_13 ` .
the flag ` -arch=sm_13 ` tells ` nvcc ` to target compute capability 1.3 , which enables double precision support . the recent fermi architecture ( gtx 480 , for example ) significantly improves the performance of double precision calculations by introducing better memory access mechanisms . in sections 6 and 7 we compare the accuracy and actual performance of _ gpu - mhd _ in single and double precision on both the gt200 and fermi architectures . the ideal mhd equations , with the assumption on the magnetic permeability , can be represented as a hyperbolic system of conservation laws . here , is the mass density , the momentum density , the magnetic field , and the total energy density . the total pressure , where is the gas pressure that satisfies the equation of state , . in addition , the mhd equations should obey the divergence - free constraint . over the last few decades , there has been a dramatic increase in the number of publications on the numerical solution of the ideal mhd equations , in particular on the development of shock - capturing numerical methods for the ideal mhd equations . we do not provide an exhaustive review of the literature here ; a comprehensive treatment of the numerical solution of the mhd equations can be found in , for example . et al . _ proposed a free , fast , simple , and efficient total variation diminishing ( tvd ) mhd code featuring modern high - resolution shock capturing on a regular cartesian grid . this code is second - order accurate in space and time , enforces the constraint to machine precision , and was successfully used for studying nonradiative accretion onto the supermassive black hole and fast magnetic reconnection . due to these advantages and its convenience for gpu versus cpu comparison , the underlying numerical scheme in _ gpu - mhd _ is based on this work .
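since the displayed equations of this system were lost in extraction , it may help to record the standard conservation form of the ideal mhd equations that the text refers to . this is the usual textbook form ( taking the magnetic permeability to be unity , as is conventional ) , not a transcription of the original display ; the symbols are the mass density \rho , velocity \mathbf{v} , magnetic field \mathbf{b} , gas pressure p , total pressure p^{*} and total energy density e :

```latex
\begin{aligned}
\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\mathbf{v}) &= 0,\\
\frac{\partial (\rho\mathbf{v})}{\partial t}
  + \nabla\cdot\!\left(\rho\,\mathbf{v}\mathbf{v}^{T} + p^{*}\mathsf{I} - \mathbf{B}\mathbf{B}^{T}\right) &= 0,\\
\frac{\partial \mathbf{B}}{\partial t} - \nabla\times(\mathbf{v}\times\mathbf{B}) &= 0,\\
\frac{\partial e}{\partial t}
  + \nabla\cdot\!\left[(e + p^{*})\,\mathbf{v} - \mathbf{B}(\mathbf{B}\cdot\mathbf{v})\right] &= 0,
\end{aligned}
\qquad
p^{*} = p + \frac{B^{2}}{2},\quad
p = (\gamma - 1)\!\left(e - \frac{\rho v^{2}}{2} - \frac{B^{2}}{2}\right),\quad
\nabla\cdot\mathbf{B} = 0.
```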
a detailed comparison of shock capturing mhd codes can be found in , for example . we plan to explore other recent high - order godunov schemes such as and for _ gpu - mhd _ as our future work . we briefly review the numerical scheme we adopted in _ gpu - mhd _ here . in this numerical scheme , the magnetic field is held fixed first and then the fluid variables are updated ; a reverse procedure is then performed to complete one time step . the three - dimensional problem is split into one - dimensional sub - problems by using a strang - type directional splitting . firstly , we describe the fluid update step , in which the fluid variables are updated while holding the magnetic field fixed . the magnetic field is interpolated to cell centers for second - order accuracy . by considering the advection along the direction , the ideal mhd equations can be written in flux - conservative vector form as \frac{\partial \mathbf{u}}{\partial t} + \frac{\partial \mathbf{f}(\mathbf{u})}{\partial x} = 0 , where \mathbf{f} is the flux vector . equation ( [ eq : euler ] ) is then solved by jin & xin's relaxing tvd method . with this method , a new variable \mathbf{w} = \mathbf{f}(\mathbf{u})/c is defined , where c is a free positive function called the _ flux freezing speed _ . for the ideal mhd equations , we then have the pair of equations \begin{aligned} \frac{\partial \mathbf{u}}{\partial t} + \frac{\partial}{\partial x}(c\mathbf{w}) & = 0 \\ \frac{\partial \mathbf{w}}{\partial t} + \frac{\partial}{\partial x}(c\mathbf{u}) & = 0 \end{aligned} these equations can be decoupled through a change to left - and right - moving variables \mathbf{u}^{r} = (\mathbf{u} + \mathbf{w})/2 and \mathbf{u}^{l} = (\mathbf{u} - \mathbf{w})/2 : \begin{aligned} \frac{\partial \mathbf{u}^{r}}{\partial t} + \frac{\partial}{\partial x}(c\mathbf{u}^{r}) & = 0 \\ \frac{\partial \mathbf{u}^{l}}{\partial t} - \frac{\partial}{\partial x}(c\mathbf{u}^{l}) & = 0 \end{aligned} the above pair of equations is then solved by an upwind scheme , separately for right - and left - moving waves , using cell - centered fluxes .
second - order spatial accuracy is achieved by interpolating the fluxes onto cell boundaries using a monotone upwind scheme for conservation laws ( muscl ) with the help of a flux limiter . a runge - kutta scheme is used to achieve second - order accuracy in the time integration . we denote by \mathbf{u}_n^t the cell - centered values of cell n at time t , and by \mathbf{f}_n^t the cell - centered flux in cell n . as an example , we consider a positive advection velocity ; the negative direction can be handled in a similar way . we obtain the first - order upwind flux from the averaged flux in cell n . two second - order flux corrections can be defined using three local cell - centered fluxes as follows : \begin{aligned} \delta\mathbf{f}_{n + 1/2}^{l , t} & = \frac{\mathbf{f}_{n}^t - \mathbf{f}_{n - 1}^t}{2} \\ \delta\mathbf{f}_{n + 1/2}^{r , t} & = \frac{\mathbf{f}_{n + 1}^t - \mathbf{f}_{n}^t}{2} \end{aligned} when the corrections have opposite signs , near an extremum , no second - order correction is applied . with the aid of a flux limiter we then get the second - order correction \delta\mathbf{f}_{n + 1/2}^{t} ; the van leer limiter is used in _ gpu - mhd _ . by adding the second - order correction to the first - order fluxes we obtain second - order fluxes . for example , the second - order accurate right - moving flux can be calculated as \mathbf{f}_{n + 1/2}^{r , t} = \mathbf{f}_{n}^t + \delta\mathbf{f}_{n + 1/2}^{t} . the time integration is performed by first calculating the fluxes and the freezing speed ; the first half time step is given by \mathbf{u}_n^{t + \delta t/2} = \mathbf{u}_n^t - \left( \mathbf{f}_{n + 1/2}^{t} - \mathbf{f}_{n - 1/2}^{t} \right) \delta t / 2 , where \mathbf{f}^{t} is computed by the first - order upwind scheme . by using the second - order tvd scheme on \mathbf{u}^{t + \delta t/2} , we obtain the full time step . to keep the tvd condition , the flux freezing speed c is set to the maximum speed at which information can travel ; the maximum speed of the fast mhd wave over all directions is chosen . as the time integration is implemented using a second - order runge - kutta scheme , the time step is determined by satisfying the cfl condition \begin{aligned} c_{\max} & = \max_{n}\left[ \, |v| + \sqrt{(\gamma p + b^{2})/\rho} \, \right] \\ \delta t & = \mathrm{cfl} / c_{\max} \end{aligned} where cfl is the courant number and is generally set to a value below unity for stability , and b is the magnitude of the magnetic field .
constrained transport ( ct ) is used to keep the constraint to machine precision . therefore , the magnetic field is defined on cell faces , and it is represented in arrays where the cell centers are denoted by , and the faces by , , and , etc . the cells have unit width for convenience . secondly , we describe the update of the magnetic field in separate two - dimensional advection - constraint steps along the -direction while holding the fluid variables fixed . the magnetic field updates along the - and -directions can be handled in a similar manner . we follow the expressions used in : we calculate the average of the face - centered field component along the direction , and from it and the corresponding averaged velocity we obtain a first - order accurate flux . the field component is updated by constructing a second - order - accurate upwind electromotive force ( emf ) using jin & xin's relaxing tvd method in the advection step ; then this same emf is immediately used to update the companion component in the constraint step . extension to three dimensions can be done through a strang - type directional splitting . equation ( [ eq : euler ] ) is dimensionally split into three separate one - dimensional equations . for a time step , let be the fluid update along , be the update of along , and be the update operator of including the flux along the direction . each includes three update operations in sequence ; for example , includes , , and . a forward sweep and a reverse sweep apply the one - dimensional updates in opposite orders , and a complete update combines a forward sweep and a reverse sweep . the dimensional splitting of the relaxing tvd scheme can be expressed as follows : \begin{aligned} \mathbf{u}^{t_2} & = \mathbf{u}^{t_1 + 2\delta t_1} = l_x l_y l_z l_z l_y l_x \,\mathbf{u}^{t_1} \\ \mathbf{u}^{t_3} & = \mathbf{u}^{t_2 + 2\delta t_2} = l_z l_x l_y l_y l_x l_z \,\mathbf{u}^{t_2} \\ \mathbf{u}^{t_4} & = \mathbf{u}^{t_3 + 2\delta t_3} = l_y l_z l_x l_x l_z l_y \,\mathbf{u}^{t_3} \end{aligned} where t_2 , t_3 , and t_4 are sequential time steps after each double sweep .
for a cartesian coordinate system , it is easy to apply strang - type directional splitting to a high - dimensional problem and split it into one - dimensional sub - problems . in principle , we can also apply directional splitting for cylindrical or spherical coordinate systems ; we may need to split the edges of the grid in each direction into equal - distance pieces and determine the positions of the cell centers and face centers . similar techniques from li and li can be utilized to extend the usage of directional splitting to cylindrical or spherical coordinate systems . this extension will be left as our future work . in this section , we provide the implementation details of _ gpu - mhd _ . with _ gpu - mhd _ , all computations are performed entirely on gpus and all data is stored in the graphics memory of the graphics card . currently , _ gpu - mhd _ works on a regular cartesian grid and supports both single and double precision modes . considering the rapid development of graphics hardware , our gpu implementation was designed to be general enough for the gt200 architecture ( gtx 295 in our study ) and the fermi architecture ( gtx 480 in our study ) ; therefore , _ gpu - mhd _ can be used on newer architectures without significant modification . before we explain our gpu implementation in detail , the consideration and strategy behind our design are presented first . during the computational process , the tvd numerical scheme for solving the mhd equations generates many intermediate results , such as the fluxes and some interpolated values of each grid point . these intermediate results are then used in the next calculation step . one important point is that not only the intermediate results of the current grid point but also those of the neighboring grid points need to be stored . this means the intermediate results of the neighboring grid points have to be calculated before going to the next calculation step .
as a result , each calculation step in the algorithm was designed with one or several kernels , and a huge amount of data has to be stored . in order to avoid data transmission between the cpu and gpu during the computation , _ gpu - mhd _ was designed to run entirely on gpus . to reduce the memory usage , the storage for the intermediate results is reused to store the intermediate results generated by the next step . the eight components ( the mass density , the three momentum components , the total energy density , and the three magnetic field components ) for solving the mhd equations are stored in eight corresponding arrays . each component of a grid point is stored close to the same component of the neighboring grid points . in any calculation step , only the necessary components of a calculation ( kernel ) will be accessed , thus providing more effective i / o . the strategy of our design is summarized as follows :
* each step of the numerical scheme is handled with one or several kernels to exploit the parallelism of gpus ;
* storage of the intermediate results is reused to reduce memory usage ;
* components of the mhd equations are stored in separate arrays to provide effective memory access .
although shared memory provides a much faster access rate than global memory , its size is very limited ( 16 kb in gtx 295 and 48 kb in gtx 480 ) . as we have to process many intermediate results in each calculation step , shared memory is too small to fit our gpu implementation . of course there are techniques for using shared memory ; the basic idea is to copy the data from global memory to shared memory first , and then use the data in shared memory to do the calculations . after the calculations have been completed , the results are written back to global memory . this benefits those computations that need many data accesses during the calculation period . however , as we mentioned at the beginning of this section , due to the nature of the algorithm , _ gpu - mhd _ was designed with many separate cuda kernels . the calculation in each kernel is actually simple , and variables
of grid points in each kernel are mostly accessed only once ( read ) or twice ( read and then write the result ) . in order to provide fast access speed , parameters and temporary results ( generated and used only within a kernel ) in each kernel are stored in registers . the parameters for the whole simulation , such as the data size and the size of the dimensions , are stored using constant memory . thus in our case , shared memory does not show its advantages ; on the other hand , the size of shared memory is too small for our problem , especially when double precision is used in the calculations . we did try to use shared memory in _ gpu - mhd _ by copying as much data as fits into shared memory for the calculations , but there was no speedup compared to our current approach . therefore , our code mainly uses global memory . there are three phases in our code : transfer of data from the host memory into the global memory , execution of the kernels , and transfer of data from the gpu into the host memory . for global memory , if the data is well organized such that a load statement in all threads in a warp accesses data in the same aligned 128 - byte block , then the threads can efficiently access data from the global memory . the process of organizing the data in such a form is called coalescing . actually , the gt200 architecture ( with compute capability 1.2 or 1.3 ) is more flexible in handling data in global memory than the cards with compute capability 1.1 or lower : coalescing of loading and storing data that are not aligned perfectly to 128 - byte boundaries is handled automatically on this architecture ( see appendix g.3.2.2 in ) . we illustrate this new feature in figure [ fig : gtx200_coalescing ] . the gt200 architecture supports 32 - byte memory blocks and places fewer restrictions on the memory address accessed by the header ( first ) thread .
even without shifting the address to an aligned 64 or 128 bytes, the gpu kernels can still keep good performance. the memory arrangement of _ gpu - mhd _ is presented here. the most intuitive way to write a parallel program to solve a multidimensional problem is to use multidimensional arrays for data storage and multidimensional threads for computation. however, the ability of the current cuda is limited in supporting multidimensional threads, so we could not implement our code in such a straightforward way. especially in three or higher dimensions, there are still some limitations in handling multidimensional arrays and multidimensional threads. as a result, the most basic way is to store data in one dimension and perform parallel computation with one-dimensional threads. by using an indexing technique, our storage and threading method can be extended to solve multidimensional problems. our data storage arrangement is expressed in fig. [ fig : matrix ] and in equations ( [ index1 ] ) to ( [ index3 ] ):

\left\{ \begin{array}{lcl} index_{x} & = & index / (size_{y} \cdot size_{z}) \\ index_{y} & = & [\,index \ \mathbf{mod}\ (size_{y} \cdot size_{z})\,] / size_{z} \\ index_{z} & = & index \ \mathbf{mod}\ size_{z} \end{array} \right. \label{index1}

\begin{aligned} index_{x} \pm 1 \;&\Rightarrow\; index \pm size_{y} \cdot size_{z} \label{index2} \\ index_{y} \pm 1 \;&\Rightarrow\; index \pm size_{z} \\ index_{z} \pm 1 \;&\Rightarrow\; index \pm 1 \label{index3} \end{aligned}

here, index_x, index_y, and index_z are the indexes of a 3d matrix, index is the 1d index used in _ gpu - mhd _, and size_x, size_y, and size_z are the matrix sizes (numbers of grid points in our study) of the 3d matrix. equation ( [ index1 ] ) expresses the mapping between three-dimensional (3d) indexes and the one-dimensional (1d) index. equations ( [ index2 ] ) to ( [ index3 ] ) express the shift operations. shift operations are very important in the numerical solution of conservation laws because some calculations are based on the neighboring grid points. the above indexing technique is used to prepare suitable values (vectors) as input values for the calculation kernels we implemented in cuda.
as an example, we give below a conceptual calculation kernel for a calculation in the x-dimension to show how the indexing technique works for this task. this kernel calculates the result with the grid point itself and its neighboring grid points in the x-dimension. the calculations in the y- or z-dimension have a similar form.

calculate_x(data, result)
    index = getID();                                  // self-incremented index for multi-threading
    grid_point_index = index;                         // (x, y, z)
    neighbor_1 = grid_point_index + (sizey * sizez);  // (x+1, y, z)
    neighbor_2 = grid_point_index - (sizey * sizez);  // (x-1, y, z)
    calculate_kernel(data, result, grid_point_index, neighbor_1, neighbor_2, ...);
    ......

the indexing technique is a common way to represent multidimensional arrays using 1d arrays by mapping a 3d index to a 1d index. the gpu kernels of tvd were designed such that each kernel calculates using the actual index of a particular grid point and its neighbors. for example, if the calculation needs the information of a particular grid point and its neighboring grid points in the x-dimension, then the indexing operation will retrieve the 1d indexes of the (x+1, y, z) and (x-1, y, z) neighbors. these resulting indexes from the indexing operation are then passed to the gpu kernels of tvd for performing the calculation. as a result, for an n-dimensional problem, what we need are n indexing operation kernels, while only one set of tvd kernels is needed at all times. for efficiency, _ gpu - mhd _ only processes problems whose number of grid points is a power of two. one reason is that the size of a warp on the gpu is 32 threads; for such grid sizes it is easier to determine the number of threads and blocks so that they fit in multiples of a warp before the gpu kernel is called. that means we do not need to check whether the id of the grid point being processed (calculated from the block id and thread id) is out of range. this is very helpful in making the gpu code run more efficiently.
on the other hand, it is also effective in reducing logical operations in a gpu kernel, which are known to be a little slow on the current gpu architecture. as a result, warp divergence caused by the data size is avoided (there is still a little warp divergence caused by the " if " operations in the calculation of our algorithm). a similar method is used in the cuda sdk code sample " reduction ". the actual memory pattern used in _ gpu - mhd _ will be presented at the end of the next subsection after introducing our algorithm. a cuda " kernel " is a function running on the gpu. note that a cuda kernel processes all grid points in parallel; therefore, a ` for ` instruction is not needed for going through all grid points. _ gpu - mhd _ includes the following steps:

1. cuda initialization
2. set up the initial condition for the specified mhd problem: the values of all grid points, the magnetic field of the cell faces, and parameters such as the time, etc.
3. copy the initial condition to device memory (cuda global memory)
4. for all grid points, calculate the value given by equation ( [ equat_cfl ] ) (implemented with a cuda kernel)
5. use the ` cublasisamax ` (in single precision mode) or ` cublasidamax ` (in double precision mode) function of the cublas library to find the maximum of all these values, and then determine the time step
6. since this value is stored in device memory, read it back to host memory (ram)
7. sweeping operations of the relaxing tvd (the calculation of the fluxes, implemented with several cuda kernels, will be explained in the next subsection)
8. advance the simulation time by the time step
9. ` if ` the simulation time reaches the target time, go to the next step; ` else ` repeat the procedure from step (4)
10.
read back the simulation data to host memory
11. output the result

the program flow of _ gpu - mhd _ is shown in fig. [ fig : flowchart ]. after the calculation of the cfl condition, the sweeping operations are performed. the sweeping operation updates both the fluid variables and the orthogonal magnetic fields along each dimension. this is the core computation of the relaxing tvd scheme described in section [ mhd ]. the cfl condition for the three-dimensional relaxing tvd scheme is obtained by equation ( [ equat_cfl ] ). the procedure is to calculate this value for every grid point and find the maximum. in _ gpu - mhd _, the parallel computation power of cuda is exploited to calculate the value of each grid point in parallel, and all the values are stored in a matrix. then the ` cublasisamax ` function is used (in double precision mode, the ` cublasidamax ` function) to find the maximum of the matrix in parallel (the so-called reduction operation). the ` cublasisamax ` function is provided in the cublas library, a set of basic vector and matrix operations provided by nvidia with the cuda toolkit. the reason we read this value back and store both it and the simulation time in host memory is that data in device memory cannot be printed out directly in the current cuda version.
this information is useful for checking whether there is any problem during the simulation process. the implementation of the sweeping operations will be explained in the next subsection. before we describe the sweeping operations, we first present the considerations on memory arrangement. implementing parallel computation using cuda kernels is somewhat similar to a parallel implementation on a cpu cluster, but it is not the same. the major concern is the memory constraint on gpus. cuda performs parallel computation on gpus, which can only access their graphics memory (gram). therefore, data must be stored in gram in order to be accessed by gpus. there are several kinds of memory on graphics hardware, including registers, local memory, shared memory, and global memory, and they have different characteristics and usages, making memory management in cuda quite different from parallel computation on a cpu cluster. in addition, even though the size of gram in a graphics card increases rapidly in newer models (for example, the nvidia graphics card geforce gtx 295 has 1.75 gb gram), not all the capacity of gram can be used to store data arbitrarily. shared memory and local memory are flexible to use; however, their sizes within a block are very limited, and thus they cannot be used for storing data of large size. in general, the numerical solution of conservation laws generates many intermediate results during the computation process, and these results must be stored for subsequent steps. therefore, global memory is mainly used in _ gpu - mhd _. after the maximum value in equation ( [ equat_cfl ] ) is found, we can obtain the time step by applying the courant number. the subsequent step is the calculation of the sweeping operators. the implementation includes two parts: updating the fluid variables and updating the orthogonal magnetic fields.
as an example, the process for calculating the x-sweep is shown in fig. [ fig : lx ], where each block is implemented with one or several cuda kernels. the process for calculating the y- or z-sweep is almost the same, except that the dimensional indexes are different. the first part of the calculation process updates the fluid variables along the x-dimension. * algorithm 1 * shows the steps and gpu kernels of this process (the data are already copied to device memory); all steps are processed on all grid points with cuda kernels in parallel: load the data and allocate memory for the storage of the intermediate results; obtain the results of equation ( [ equat_b ] ) (the magnetic field of the cell faces); obtain the results of equations ( [ eq : flux ] ) and ( [ eq : lr ] ); compute the flux of a half time step with the difference calculation (the " first - order upwind scheme of fluid " cuda kernels in fig. [ fig : lx ]) obtained by equation ( [ eq : delta_u_half ] ); calculate the intermediate half-step result using equation ( [ eq : delta_u_half ] ); obtain the results of equations ( [ eq : flux ] ) and ( [ eq : lr ] ) again (the same algorithm and the same cuda kernels as in step 4); compute the flux of the other half time step with the difference calculation (the " second - order tvd scheme of fluid " cuda kernels in fig. [ fig : lx ]) obtained by equation ( [ eq : delta_u_full ] ) and the limiter (equation ( [ eq : limiter ] )); calculate the full-step result using equation ( [ eq : delta_u_full ] ) and save it back; finally, free the storage of the intermediate results (and continue to the second part, updating the orthogonal magnetic fields). in this process, we have to calculate the magnetic fields of the grid point (equation [ equat_b ]) first, because all the magnetic fields are defined on the faces of the grid cell.
to update the fluid variables, the main process, which includes one or even several cuda kernels, is to calculate the effect of the orthogonal magnetic fields on the fluid variables through equations ( [ eq : flux ] ), ( [ eq : lr ] ) and ( 10 ). one such main process gives the flux of a half time step. after two main processes of flux calculation and the other difference calculations, the value of the fluid is updated by one full time step in one process. the second part of the calculation process is to update the orthogonal magnetic fields in the y-dimension and the z-dimension with the fluid along the x-dimension. the strategy and implementation are similar to those in the first part, but with a different algorithm for the orthogonal magnetic fields. (after the fluid processes, we obtain the updated fluid variables.) load the density and velocity and allocate memory for the intermediate results; determine the fluid speed with the updated variables, with the difference calculated in the x-dimension, as obtained by equation ( [ eq : averaging_x ] ); compute the flux of a half time step with the difference calculation of the flux of the magnetic field in the x-dimension (the " first - order upwind scheme of magnetic field " cuda kernels in fig. [ fig : lx ]) obtained by equations ( [ eq : delta_u_half ] ) and ( [ eq : first_flux ] ); calculate the intermediate half-step result by applying equation ( [ eq : delta_u_half ] ) to the magnetic field (not to the fluid variables); compute the flux of the other half time step with the difference calculation (the " second - order tvd scheme of magnetic field " cuda kernels in fig.
[ fig : lx ]) obtained by equation ( [ eq : delta_u_half ] ), the limiter of equation ( [ eq : limiter ] ) and equation ( [ eq : first_flux ] ); calculate the result by applying equation ( [ eq : delta_u_full ] ) to the magnetic field, and save it back. (the following steps are similar to the above steps, but the affected orthogonal magnetic field is changed from one component to the other.) determine the fluid speed with the updated variables, with the difference calculated in the x-dimension, as obtained with equation ( [ eq : averaging_x ] ); compute the flux of a half time step with the difference calculation of the flux of the magnetic field in the x-dimension (the " first - order upwind scheme of magnetic field " cuda kernels in fig. [ fig : lx ]) obtained by equations ( [ eq : delta_u_half ] ) and ( [ eq : first_flux ] ); calculate the intermediate half-step result by applying equation ( [ eq : delta_u_half ] ) to the magnetic field (not to the fluid variables); compute the flux of the other half time step with the difference calculation (the " second - order tvd scheme of magnetic field " cuda kernels in fig. [ fig : lx ]) obtained by equation ( [ eq : delta_u_half ] ), the limiter of equation ( [ eq : limiter ] ) and equation ( [ eq : first_flux ] ); calculate the results by applying equation ( [ eq : delta_u_full ] ) to the magnetic field, and save them back; finally, free the storage of the intermediate results. in * algorithm 1 *, the calculations in steps (4) to (9) are the steps for the first orthogonal magnetic field component, and steps (11) to (16) are the steps for the second. the steps for the two components are almost the same; the only different parts are the dimensional indexes of the difference calculations and the affected magnetic fields.
after the first part, the fluid is updated by a half step. this change of the fluid affects the orthogonal magnetic fields. therefore, the corresponding change (flux) of the orthogonal magnetic fields can be calculated with the density and velocity of the updated fluid. then the orthogonal magnetic fields are also updated, and these changes in turn affect the fluid. after one sweep process, both the fluid and the magnetic fields are updated with the effect of the flow in one dimension. a sweeping operation sequence includes two sweeps in each dimension (see equations ( [ eq : l_sequence ] )), so we actually get the fluid and magnetic fields of a full double time step after one sweeping operation sequence. note that the second sweep of each dimension in the sequence is a reverse sweeping operation, so the order of the updates has to be reversed accordingly. as we mentioned before, the numerical solution of conservation laws needs a lot of memory because many intermediate results are generated during the computation process. these intermediate results should be stored for the next calculation steps, which need the information of the neighboring grid points obtained in the previous calculation steps. otherwise, in order to avoid asynchrony problems in parallel computation, we would have to do many redundant processes. this is because the processors on gpus do not automatically start or stop working synchronously. without storing the intermediate results, it would be hard to guarantee that the values of the neighboring grid points are updated synchronously.
to minimize the memory usage, not only is the calculation process of each sweep divided into several steps (cuda kernels), but the intermediate results that are stored are also kept as few as possible. the processes dealing with the difference calculations are also divided into several steps to minimize the storage of the intermediate results and to guarantee that no wrong result is caused by asynchrony. it should be realized that most of the processes in the three-dimensional relaxing tvd scheme with the dimensional splitting technique are similar. pen _ et al . _ swapped the data of the x-, y-, and z-dimensions, while _ gpu - mhd _ uses one-dimensional arrays; a similar swapping technique could be applied in our case with some indexing operations. instead of transposing or swapping the data, we implemented each calculation part of the flux computation with two sets of cuda kernels: one set is the cuda kernels for calculating the relaxing tvd scheme (we call them tvd kernels here) and the other set is the cuda kernels actually called by the sweep operations (we call them indexing kernels here). indexing operations are contained in all indexing kernels. after the index is calculated, the tvd kernels are called and the indexes are passed to them, letting the tvd kernels calculate the flux of the corresponding dimension. therefore, the only difference among the x-, y-, and z-sweeps is the dimensional index. the flux computation of _ gpu - mhd _ is shown in fig. [ fig : indexingkernels ]. the indexing operation swaps the target that will be updated, and the neighboring relationship is also changed accordingly: a calculation that uses one neighboring offset in one sweep uses a different neighboring offset in another sweep. as transposing the data in a matrix needs more processing time, it is efficient and flexible to extend the code to multiple dimensions by separating the indexing operation from the flux calculation.
as we mentioned in section 4.1, the data is stored in 1d arrays; the data accesses of the x-, y-, and z-sweeps are depicted in fig. [ fig : dataaccess ]. in each sweep, the data of the neighboring grid points along the sweep dimension are used to calculate and update the data of each grid point, and so on for the other two sweeps with their respective neighbors. it seems that the data accesses of the y- and z-sweeps would slow down the performance, since these accesses are not in the so-called " coalescing " pattern. however, experimental results show that the computational times spent on calculating each dimensional component in the x-, y-, and z-sweeps are very close in our current arrangement (see tables [ table_1d_part ], [ table_2d_part ], and [ table_3d_part ] in section 7). this is due to the fact that the gt200 and fermi gpus are more flexible in handling data accesses that are not perfectly coalesced (see section 4.1). thus we did not further reorganize these data accesses into an optimal coalescing pattern. after the whole pipeline of fig. [ fig : flowchart ] is completed, the mhd simulation results are stored in gram, and these results are ready to be further processed by the gpu for visualization or read back to the cpu for other usage.
due to the data-parallel nature of the algorithm and its high arithmetic intensity, we can expect our gpu implementation to exhibit relatively good performance on gpus. in this section, several one-dimensional (1d), two-dimensional (2d), and three-dimensional (3d) numerical tests for the validation of _ gpu - mhd _ are given. two graphics cards, nvidia geforce gtx 295 and gtx 480, were used. gtx 295 has two gpus inside, but only one was used in these numerical tests. the results shown in this section are computed in single precision mode in _ gpu - mhd _ on gtx 295. the difference between single precision and double precision computation results will be discussed in section [ sec : accuracy ]. the 1d brio - wu shock tube problem, which is a mhd version of the sod problem, consists of a shock tube with two initial equilibrium states (left side and right side). a constant value of was used, and the problem was solved with 512 grid points. numerical results are presented in fig. [ fig : rj952a ] and fig. [ fig : rj952a2 ], which include the density, the pressure, the energy, the y- and z-magnetic field components, and the x-, y-, and z-velocity components. the results are in agreement with those obtained by and .
(figure caption: the results computed with 512 grid points are shown with circles; the solid line shows the reference high-resolution result of 4096 grid points.) the first 2d test is the orszag - tang problem, which is used to study incompressible mhd turbulence. in our test, the boundary conditions are periodic everywhere. the density, pressure, initial velocities, and magnetic field are given by the standard orszag - tang initial conditions. the orszag - tang vortex test was performed in a two-dimensional periodic box with 512 512 grid points. the results of the density and gas pressure evolution of the orszag - tang problem at the two output times are shown in fig. [ fig : ot ], where the complex pattern of interacting waves is perfectly recovered. the results agree well with those in lee _ et al .
_ . (figure caption: results at the two output times, computed with 512 512 grid points.) the second 2d test is the mhd blast wave problem. the mhd spherical blast wave problem of zachary _ et al . _ is initiated by an over-pressured region in the center of the domain. the result is a strong outward-moving spherical shock with rarefied fluid inside the sphere. we followed the test suite of athena. the condition for the 2d mhd blast wave problem is listed as follows. in fig. [ fig:2dblast ], we present images of the density and gas pressure computed with 512 512 grid points. the results are in excellent agreement with those presented in . the third 2d test is the mhd rotor problem, taken from . it initiates a high-density rotating disk of fluid of a given radius measured from the center point. the ambient fluid outside of the spherical region has low density, and the fluid between the high-density disk fluid and the ambient fluid has a linear density and angular speed profile. two initial value sets, provided in and , were tested. the initial conditions for the 2d mhd rotor problem (first and second rotor problems) are listed as follows. in fig. [ fig:2drotor ], we present images of the density and gas pressure of the two rotor problems computed with 512 512 grid points. the results are in excellent agreement with those presented in and .
(figure caption: results of the density and gas pressure of the first and second mhd rotor tests, both computed with 512 512 grid points.) the 3d version of the mhd spherical blast wave problem was also tested; the condition is listed as follows. fig. [ fig:3dblast01 ] and fig. [ fig:3dblast02 ] show the results of the 3d blast wave problem, which include the density, gas pressure, and magnetic pressure, sliced along a coordinate plane at 0.5. the test was computed with 128 128 128 grid points. due to the scarcity of published 3d test results, we do not make direct contact with results presented in the literature here. considering only the conserved fluid variables and magnetic fields, the memory requirement of the mhd problem is about 512 mb gram for single precision and 1024 mb gram for double precision, respectively. if the storage of intermediate results (see section [ sec : sweeping ]) is considered, the amount of memory required will be about 2.25 gb (single precision). as we mentioned in section [ sec : sweeping ], not all the capacity of gram can be used to store data arbitrarily. as we said at the beginning of this section, there are actually two gpus inside the gtx 295 and the 1.75 gb gram is the total amount of gram shared by the two gpus, so only less than half of it can be used. as a result, the test of the 3d problem at that resolution cannot be run on one graphics card.
(figure caption: results sliced along the same coordinate plane at 0.5 and computed with 128 128 128 grid points.)

in mhd simulations, accuracy always has to be considered, since errors may grow quickly and crash the simulation if low precision is used for the computation. scientific computations such as mhd simulations mostly use double precision to reduce errors. in this section, the results generated by _ gpu - mhd _ using single precision and double precision modes are shown and compared. the difference between the results of double precision and single precision computation of the one-dimensional brio - wu shock tube problem is shown in fig.
[ fig : briowu_diff ]. the two curves are almost the same, but there are actually some differences. (figure caption: differences for the 1d brio - wu shock tube problem with 512 grid points.) in the 2d cases, the absolute differences between the results of double precision and single precision computation of the mhd rotor test and the orszag - tang vortex test are shown in fig. [ fig : rotor_diff ] and fig. [ fig : ot_diff ], respectively. the double precision computation results of both tests are also shown on the left-hand side of these figures. for the mhd rotor test, even though the resulting image (left in fig. [ fig : rotor_diff ]) looks similar to the single precision resulting image (top-left of fig. [ fig:2drotor ]), high differences at the dense region can be found; the experimental result shows that the maximum difference is considerable. (figure caption: results of the mhd rotor problem.) fig. [ fig : ot_diff ] shows the absolute difference between the results of double precision and single precision computation of the orszag - tang test at the two output times.
as the simulation time increases, the maximum difference increases as well. (figure caption: results of the orszag - tang problem at the two output times.) fig. [ fig:3dblast1_diff ] and fig. [ fig:3dblast2_diff ] show the resulting images of the simulation using double precision and the contours of the absolute differences between the results of double precision and single precision computation of the 3d blast wave test. as it is a higher-dimensional computation at low resolution, the differences between them are clear. the number of grid points having a higher difference value increases, although the maximum difference is still small. the small difference values make the double precision resulting images (fig. [ fig:3dblast1_diff ] and fig.
[ fig:3dblast2_diff ]) look similar to the single precision resulting images (fig. [ fig:3dblast01 ] and fig. [ fig:3dblast02 ]). (figure caption: differences for the 3d blast wave problem.) an important point to realize is that not only do the grid points in the high-density region have high difference values, but also the number of grid points with high difference values and the magnitude of those differences increase along with the simulation time. higher dimensionality is another factor that introduces noticeable differences between the computation results with different precisions, because a higher dimension means a grid point has more neighbors, and more neighbors require more computation steps in one time step. as a result, the differences become more obvious.
therefore, for a long-term simulation, double precision computation is a must. the original fortran code is a second-order accurate high-resolution tvd mhd code. theoretically, we consider that _ gpu - mhd _ is sufficient to capture forward and reverse shocks as well as any other discontinuities, such as contact discontinuities, which are important in space physics. as _ gpu - mhd _ is a dimensional-splitting based code, there are two drawbacks: (i) the code is unable to evolve the normal (to the sweep direction) magnetic field during each sweep, and (ii) splitting errors will generally be introduced, because the linearized jacobian flux matrices do not commute in most nonlinear multidimensional problems. the performance measurements of the gpu and cpu implementations, as well as of the computation using double precision and single precision, are carried out in this section. different numbers of grid points and different dimensions were used in the performance tests. we ran both _ gpu - mhd _ and pen _ et al . _ 's fortran / cpu mhd code to perform the simulations on a pc with an intel core i7 965 3.20 ghz cpu and 6 gb main memory, running microsoft windows xp 64-bit professional. two graphics cards were tested: nvidia geforce gtx 295 with 1.75 gb video memory and gtx 480 (fermi) with 1.5 gb video memory. the fortran compiler and gpu development toolkit we used are g95 stable version 0.92 and nvidia cuda 3.2, respectively. _ gpu - mhd _ was designed for three-dimensional problems, thus the dimensions are expressed in three-dimensional form in all figures. for the 1d test, the 1d brio - wu shock tube problem (see section [ sub : brio - wu ]) was used. for the 2d test, the 2d orszag - tang problem (see section [ sub : o - t ]) was used.
for the 3d test , the 3d blast wave problem ( see section [ sub:3dblast ] ) was used . fig . [ fig : speedup1d ] reports the comparison of _ gpu - mhd _ and the fortran / cpu code for the 1d test with different numbers of grid points in single and double precision . in single precision mode there is basically only about 10 times speedup ( case ) since the number of grid points is small . the amount of speedup increases as the resolution increases , but drops when the resolution reaches . this is because the `` max threads per block '' of gtx 295 is 512 : below that point all the computations are handled within one block and a very high processing speed can be achieved . on gtx 480 , there is about 80 times speedup ( case ) and the amount of speedup increases linearly due to the higher floating point capability ( 512 mad ops / clock ) . in double precision mode , around 10 times and 60 times speedup ( case ) is achieved on gtx 295 and gtx 480 , respectively . table [ table_1d_precision ] gives the comparison of _ gpu - mhd _ using single precision and double precision for the 1d test with different numbers of grid points . on gtx 295 , a similar speed drop happened in both single and double precision modes , but it occurred at different resolutions : in single precision and in double precision . this is not surprising , since double precision doubles the amount of data to be handled by the processors . except for the special case of resolution , the processing speeds in the two modes are very close . on gtx 480 , the performance between single and double precision is quite close . ( table : the performance results of the 1d test between single precision and double precision of _ gpu - mhd _ at different resolutions . ) there is a need to visualize the mhd simulation data ; for example , daum developed a toolbox called _ visan mhd _ in matlab for mhd simulation data visualization and analysis . with the help of gpus , stantchev _ et al .
_ used gpus for computation and visualization of plasma turbulence . in _ gpu - mhd _ , using the parallel computation power of gpus and cuda , the simulation results of one time step can be computed in dozens or hundreds of milliseconds . given the efficiency of _ gpu - mhd _ , near real - time visualization can be provided for 1d and 2d problems . the motion or attributes of the magnetic fluid can be computed and rendered on the fly , so the changes of the magnetic fluid during the simulation can be observed in real - time . with real - time visualization added , the flow of _ gpu - mhd _ in fig . [ fig : flowchart ] is extended as shown in fig . [ fig : flowchart_v ] : _ gpu - mhd _ provides different visualization methods for one - dimensional , two - dimensional and three - dimensional problems . to visualize one - dimensional problems , for each time step the simulation results are copied to the cuda global memory that is mapped to the vertex buffer object ( vbo ) . one grid point is mapped to one vertex : the position of each grid point is mapped as the -position of the vertex and the selected physical value ( , , etc . ) is mapped as the -position of the vertex . then a curve through these vertices is drawn . since the vbo is mapped to cuda global memory and the simulation results are stored in gram , the copying and mapping operations are fast . experimental results show that _ gpu - mhd _ with real - time visualization can achieve 60 frames per second ( fps ) in single precision mode and 30 fps in double precision mode on gtx 295 . on gtx 480 , around 60 fps is achieved in both single and double precision modes . fig . [ fig:1dvis ] shows two example images of 1d visualizations using _ gpu - mhd _ .
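the grid - point - to - vertex mapping described above can be sketched as follows ( a schematic in python rather than the cuda / opengl code of _ gpu - mhd _ ; the ` height ` rescaling into the viewport is an illustrative choice ) :

```python
import numpy as np

def make_1d_vertices(positions, values, height=1.0):
    """One grid point -> one vertex: the grid position becomes the
    horizontal coordinate of the vertex and the selected physical value
    becomes the vertical one.

    'height' rescales the value range into the viewport (assumes the
    field is not constant).
    """
    y = height * (values - values.min()) / np.ptp(values)
    return np.column_stack([positions, y])

x = np.linspace(0.0, 1.0, 8)
rho = np.exp(-10.0 * (x - 0.5) ** 2)    # stand-in for a density profile
verts = make_1d_vertices(x, rho)        # (8, 2) array, VBO-style layout
```

drawing a line strip through the rows of ` verts ` reproduces the curve described in the text ; in _ gpu - mhd _ this array lives in gram and is handed to opengl without a round trip through the host .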
( figure : of the brio - wu shock tube problem with grid points using _ gpu - mhd _ . ) the operational flow of visualization of 2d problems is similar to that in 1d visualization . however , instead of a vertex buffer object ( vbo ) , a pixel buffer object ( pbo ) is used . for each time step , the simulation results are copied to the cuda global memory that is then mapped to the pbo . one grid point is mapped to one pixel : the and positions of each grid point are mapped as the corresponding -position and -position of the pixel , and the selected physical value ( , , etc . ) is mapped as the color of the pixel to form a color image . to render this color image , a transfer function is set to map the physical value to the color of the pixel and then the resulting image is drawn . similar to the vbo , the pbo is mapped to cuda global memory and the simulation results are stored in gram , so the copying and mapping operations are also fast and do not noticeably affect the performance . although the number of grid points in a 2d problem is much larger than in a one - dimensional problem , the fps still reaches 10 in single precision mode and 6 in double precision mode on gtx 295 when the number of grid points is , still giving acceptable performance to the user . on gtx 480 , 22 fps in single precision and 17 fps in double precision are achieved and thus interactive rates are available . fig . [ fig:2dvis ] shows two example images of 2d visualizations using _ gpu - mhd _ .
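the transfer function step can be sketched as a minimal two - color linear map ( the actual color table of _ gpu - mhd _ is not specified in the text , so the endpoint colors here are illustrative ) :

```python
import numpy as np

def transfer_function(field, cmap=((0, 0, 255), (255, 0, 0))):
    """Map a 2-D scalar field to an RGB image.

    Linear interpolation between two endpoint colours (blue -> red by
    default); a minimal stand-in for the transfer function used when
    filling the PBO with one pixel per grid point.
    """
    lo, hi = field.min(), field.max()
    t = (field - lo) / (hi - lo) if hi > lo else np.zeros_like(field)
    c0, c1 = (np.array(c, dtype=float) for c in cmap)
    rgb = (1.0 - t)[..., None] * c0 + t[..., None] * c1
    return rgb.astype(np.uint8)

# a simple ramp: lowest value maps to the first colour, highest to the second
img = transfer_function(np.add.outer(np.arange(4.0), np.arange(4.0)))
```

a production transfer function would normally use a multi - stop color table , but the structure is the same : normalize the physical value , then interpolate in color space and write the result into the pixel buffer .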
( figure : of the orszag - tang vortex problem with grid points using _ gpu - mhd _ . ) visualization of 3d problems , however , is different from that of 1d and 2d problems . a gpu - based volume visualization method and texture memory ( or video memory ) are used . unfortunately , the current version ( version 2.3 ) of cuda does not provide the feature to copy data from the cuda global memory to texture memory directly , even though both of them are in gram . on the other hand , texture memory is readable but not writable in cuda . so the simulation results have to be copied to the main memory first , and then copied to texture memory . in addition , the number of grid points is usually large compared to 2d problems and volume visualization techniques are somewhat time - consuming . as a result , on gtx 295 , _ gpu - mhd _ only gets 2 fps in single precision mode and 1 fps in double precision mode when the number of grid points is , and this is far from real - time . nevertheless , we still get 10 fps ( single precision mode ) and 6 fps ( double precision mode ) for performing the simulation of problems with resolution of , and about 20 fps ( single and double precision modes ) for problems with resolution of . on gtx 480 , we can get 60 fps in both single and double precision for grid points , 20 fps ( single ) and 16 fps ( double ) for grid points , and 6.1 fps ( single ) and 3.6 fps ( double ) for grid points . fig . [ fig:3dvis ] shows two example images of 3d visualizations using _ gpu - mhd _ .
( figure : of the 3d blast wave problem with grid points using _ gpu - mhd _ . ) in this paper we present , to the author s knowledge , the first implementation of mhd simulations entirely on gpus with cuda , named _ gpu - mhd _ , to accelerate the simulation process . the aim of this paper is to present a gpu implementation in detail , demonstrating how tvd - based mhd simulations can be implemented efficiently for nvidia gpus with cuda . a series of numerical tests have been performed to validate the correctness of our code . an accuracy evaluation comparing single and double precision computation results is also given , indicating that double precision support on gpus is a must for long - term mhd simulation . performance measurements of both single and double precision modes of _ gpu - mhd _ were conducted on the gt200 architecture ( gtx 295 ) and the fermi architecture ( gtx 480 ) . these measurements show that our gpu - based implementation achieves speedups of between one and two orders of magnitude , depending on the graphics card used , problem size , and precision , when compared to the original serial cpu mhd implementation . in order to provide the user a better understanding of the problems being investigated during the simulation process , we have extended _ gpu - mhd _ to support visualization of the simulation results .
with _ gpu - mhd _ , the whole mhd simulation and visualization process can be performed entirely on gpus . there are two directions in our future work . firstly , we are going to extend _ gpu - mhd _ to multiple gpus and gpu clusters to fully exploit the power of gpus . secondly , we will investigate implementing other recent high - order godunov mhd algorithms such as and on gpus . these gpu - based algorithms will serve as the base of our gpu framework for simulating large - scale mhd problems in space weather modeling . this work has been supported by the science and technology development fund of macao sar ( 03/2008/a1 ) and the national high - technology research and development program of china ( 2009aa122205 ) . xueshang feng is supported by the national natural science foundation of china ( 40874091 and 40890162 ) . the authors would like to thank dr . ue - li pen and bijia pang at the canadian institute for theoretical astrophysics , university of toronto for providing the fortran mhd code . thanks to dr . yuet ming lam for his suggestions on the revision of the paper . special thanks to the anonymous reviewers for their constructive comments on the paper . d. s. balsara , d. s. spicer , a staggered mesh algorithm using high order godunov fluxes to ensure solenoidal magnetic fields in magnetohydrodynamic simulations , _ j. comput . phys . _ * 149 * ( 1999 ) 270 - 292 . r. g. belleman , j. bdorf , s. f. portegies zwart , high performance direct gravitational n - body simulations on graphics processing units ii : an implementation in cuda , _ new astronomy _ * 13 * ( 2008 ) 103 - 112 . s. che , m. boyer , j. meng , d. tarjan , j. w. sheaffer , k. skadron , a performance study of general - purpose applications on graphics processors using cuda , _ j. parallel distrib . comput . _ * 68 * ( 2008 ) 1370 - 1380 . a. ciardi , s. lebedev , a. frank , e. blackman , d. ampleford , c. jennings , j. chittenden , t. lery , s. bland , s. bott , g. hall , j. rapley , f. vidal , a.
marocchino , 3d mhd simulations of laboratory plasma jets , _ astrophysics and space science _ * 307 * ( 2007 ) 17 - 22 . m. dryer , space weather simulations in 3d mhd from the sun to earth and beyond to 100 au : a modeler s perspective of the present state of the art ( invited review ) , _ asia j. of physics _ * 16 * ( 2007 ) 97 - 121 . j. c. hayes , m. l. norman , r. a. fiedler , j. o. bordner , p. s. li , s. e. clark , a. ud - doula , m .- m. mac low , simulating radiating and magnetized flows in multiple dimensions with zeus - mp , _ astrophys . j. supp . _ * 165 * ( 2006 ) 188 - 228 . d. lee , a. e. deane , a parallel unsplit staggered mesh algorithm for magnetohydrodynamics , in _ parallel computational fluid dynamics theory and applications _ , edited by a. deane _ et al . _ , elsevier ( 2006 ) 243 - 250 . p. mininni , e. lee , a. norton , j. clyne , flow visualization and field line advection in computational fluid dynamics : application to magnetic fields and turbulent flows , _ new j. phys . _ * 10 * ( 2008 ) 125007 . l. seiler , d. carmean , e. sprangle , t. forsyth , m. abrash , p. dubey , s. junkins , a. lake , j. sugerman , r. cavin , r. espasa , e. grochowski , t. juan , p. hanrahan , larrabee : a many - core x86 architecture for visual computing , _ acm trans . graph . _ * 27 * ( 2008 ) article 18 . g. stantchev , d. juba , w. dorland , a. varshney , using graphics processors for high - performance computation and visualization of plasma turbulence , _ computing in science and engineering _ * 11 * ( 2009 ) 52 - 59 . g. tóth , d. odstrčil , comparison of some flux corrected transport and total variation diminishing numerical schemes for hydrodynamic and magnetohydrodynamic problems , _ j. comput . phys . _ * 128 * ( 1996 ) 82 - 100 . magnetohydrodynamic ( mhd ) simulations based on the ideal mhd equations have become a powerful tool for modeling phenomena in a wide range of applications including laboratory , astrophysical , and space plasmas .
in general , high - resolution methods for solving the ideal mhd equations are computationally expensive , and beowulf clusters or even supercomputers are often used to run the codes that implement these methods . with the advent of the compute unified device architecture ( cuda ) , modern graphics processing units ( gpus ) provide an alternative approach to parallel computing for scientific simulations . in this paper we present , to the author s knowledge , the first implementation of mhd simulations entirely on gpus with cuda , named _ gpu - mhd _ , to accelerate the simulation process . _ gpu - mhd _ supports both single and double precision computation . a series of numerical tests have been performed to validate the correctness of our code . an accuracy evaluation comparing single and double precision computation results is also given . performance measurements of both single and double precision are conducted on both the nvidia geforce gtx 295 ( gt200 architecture ) and gtx 480 ( fermi architecture ) graphics cards . these measurements show that our gpu - based implementation achieves speedups of between one and two orders of magnitude , depending on the graphics card used , problem size , and precision , when compared to the original serial cpu mhd implementation . in addition , we extend _ gpu - mhd _ to support the visualization of the simulation results and thus the whole mhd simulation and visualization process can be performed entirely on gpus . mhd simulations , gpus , cuda , parallel computing
in it was argued that the transport barrier near the core of the austral polar night jet can be explained by a mechanism different from the potential vorticity ( pv ) barrier mechanism . the new barrier mechanism , which was subsequently referred to as `` strong kam stability '' , follows from an argument that does not make use of dynamical constraints on the streamfunction . this necessitates that dynamical constraints be considered separately . interestingly , decoupling of the dynamical constraints from the barrier mechanism leads to the possibility that transport barriers in pv - conserving flows may occur at locations that do not coincide with pv - barriers . predicted that barriers of this type should be present in close proximity to the cores of westward zonal jets in planetary atmospheres . in this paper we demonstrate that transport barriers of this type are present in a numerically simulated pv - conserving flow . we also argue that barriers of the type described are present in jupiter s midlatitude weather layer and in the earth s summer hemisphere subtropical stratosphere . in the following section passive tracer transport in a numerically simulated perturbed pv - staircase flow is investigated . it is shown that robust meridional transport barriers in close proximity to the cores of both eastward and westward zonal jets are present . the surprise is that westward jets , at which the background pv - gradient vanishes , act as transport barriers . essential elements of the strong kam stability argument are reviewed to explain this behavior . in section 3 we discuss the relevance of transport barriers of the strong kam stability type to : 1 ) the interpretation of jupiter s midlatitude weather layer belt - zone structure ; and 2 ) the earth s summer hemisphere subtropical stratosphere .
in the final section we briefly discuss our results . in this section we consider passive tracer transport in a perturbed pv - staircase flow . we assume quasigeostrophic dynamics in a one - layer reduced - gravity setting and make use of a local cartesian coordinate system where and increase to the east and north , respectively , and the constant is the local -derivative of the coriolis parameter . the zonal and meridional components of the velocity field are and , respectively , where is the streamfunction . the flow is constrained to satisfy where is the quasigeostrophic potential vorticity and is the deformation radius . recent theoretical , numerical and experimental work , including extensions involving spherical geometry , shallow - water dynamics and inclusion of weak forcing and dissipation , has shown that flows satisfying ( [ qcons])([qdef ] ) with periodic boundary conditions in tend to evolve toward a state of the form where is a small perturbation to . the background zonal flow is characterized by an approximately piecewise constant pv distribution that has been appropriately described as a pv - staircase . the corresponding zonal velocity profile is periodic in . taking the period to be , the jump in at each step is , and the zonal flow is the periodic extension of consisting of a periodic sequence of alternating narrow eastward and broad westward zonal jets with piecewise constant between adjacent eastward jets , as illustrated in fig . note that at the center of each constant band lies a westward jet . in the limit ( [ u_0 - 1 ] ) reduces to whose qualitative features are identical to those of the finite case .
the flows ( [ u_0 - 1 ] ) and ( [ u_0 - 2 ] ) are normalized so that the integral of over vanishes . the rhines scale , where is a characteristic velocity , is an approximate measure of the separation between adjacent eastward jets . corresponding to ( [ u_0 - 2 ] ) this separation is exactly where is the wind speed at the core of one of the eastward jets . before presenting numerical simulations of passive tracer transport in a perturbed pv - staircase flow we describe predictions , based on two different arguments , of the expected locations of transport barriers in this flow . first , the pv - barrier argument leads to the expectation that transport barriers should be present only near the cores of the eastward jets . the basic elements of the pv - barrier argument were used originally by to explain the mechanism by which the eastward jet at the perimeter of the austral stratospheric polar vortex , sometimes referred to as the austral polar night jet , during the late winter and early spring serves to trap ozone - depleted air inside the polar vortex . the essential elements of the argument are that at eastward jets the large gradient of is associated with a large rossby wave restoring force ( `` rossby wave elasticity '' ) which inhibits meridional exchange of fluid at larger scales , and that shear acts to inhibit meridional exchange at smaller scales . ( but note that in it is argued that increasing meridional shear acts , on average , to increase meridional exchange .
) an alternative argument , based on the strong kam stability mechanism , leads to the expectation that transport barriers should be present near the cores of both eastward and westward jets in a pv - staircase flow . the argument leading to this expectation will now be reviewed . the lagrangian ( particle trajectory ) equations of motion , constitute a nonautonomous one - degree - of - freedom hamiltonian system , with the canonically conjugate coordinate momentum pair and the hamiltonian . this allows results from studies of integrable and nonintegrable hamiltonian systems to be applied . in the background steady flow , with , equations ( [ lagrange ] ) are integrable and the motion is describable using a transformed hamiltonian where are action angle variables . each trajectory lies on a torus which is labeled by its -value . motion is -periodic in with angular frequency . according to each of many variants of the kolmogorov arnold moser ( kam ) theorem , many of the unperturbed tori survive in the perturbed system ( [ psi ] ) , albeit in a slightly distorted form , provided certain conditions are met . surviving tori can not be traversed and serve as transport barriers . ( for reasons described in , the process known as arnold diffusion does not occur in the systems under study . ) torus destruction is caused by the excitation and overlapping of resonances . each resonance has a characteristic width . nondegenerate resonance widths are proportional to . degenerate resonance widths do not vanish but are generally narrower than nondegenerate resonance widths . ( quantitative estimates of both degenerate and nondegenerate resonance widths are given in ; for our purposes it suffices to note the general trend . ) for most moderate strength perturbations to the background , small resonance widths near degenerate tori lead to nonoverlapping resonances and thus unbroken tori that serve as transport barriers . this constitutes the strong kam stability barrier argument .
in our model ( [ psi ] ) the connection between and is particularly simple : , where is the distance around the planet along a constant latitude circle at the latitude at which is defined . the period of motion is , so . at the cores of both eastward and westward jets , so at these locations and the strong kam stability argument predicts that robust meridional transport barriers should be present . barriers of this type may be broken if the transient perturbation strongly excites a low - order resonance with the frequency of the background flow near the core of the jet . three final issues are noteworthy . first , because the strong kam stability argument is a kinematic argument ( based on ( [ lagrange ] ) alone ) , dynamical consistency ( consistency between ( [ qcons])([psi ] ) and ( [ u_0 - 1 ] ) or ( [ u_0 - 2 ] ) ) and flow stability must be considered separately . second , our emphasis on jets is unnecessarily restrictive inasmuch as the strong kam stability argument holds at all locations where . third , the stated results on kam theory assume that can be expressed as a multiperiodic ( generically quasiperiodic ) function of .
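the prediction that barriers sit where the zonal profile has an extremum ( so that the rotational frequency is locally independent of the action ) can be sketched numerically ; the cosine profile and its amplitude below are illustrative stand - ins , not the staircase profile ( [ u_0 - 1 ] ) itself :

```python
import numpy as np

def barrier_latitudes(u0, y):
    """y-locations where du0/dy changes sign, i.e. the jet cores.

    Under the strong KAM stability argument these extrema of the zonal
    profile (eastward and westward jet cores alike) are the predicted
    meridional transport barriers.
    """
    du = np.gradient(u0, y)
    i = np.where(np.diff(np.sign(du)) != 0)[0]
    # linearly interpolate each sign change to locate the zero crossing
    return y[i] - du[i] * (y[i + 1] - y[i]) / (du[i + 1] - du[i])

# toy profile: an eastward jet core at y = 0 and a westward one at y = L/2
L = 8000e3                                  # jet spacing, in metres
y = np.linspace(-L / 4, 3 * L / 4, 2000)    # grid avoiding the exact extrema
u0 = 50.0 * np.cos(2.0 * np.pi * y / L)     # illustrative amplitude
cores = barrier_latitudes(u0, y)            # close to [0, L/2]
```

note that the westward core at y = L/2 is flagged on exactly the same footing as the eastward core at y = 0 : the criterion involves only the extrema of the profile , not the sign or size of the pv - gradient there .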
before presenting the results , it is appropriate to make two comments about what we expect to learn from these simulations . first , these simulations provide a test of the assertion that the decomposition ( [ qcons])([u_0 - 1 ] ) is dynamically consistent . second , given a positive outcome of the first test , these simulations test whether westward zonal jets , across which there is no pv - barrier , serve as robust meridional transport barriers . the quasigeostrophic equation ( [ qcons])([qdef ] ) was solved numerically using a standard pseudospectral technique on a grid in the computational domain . periodic boundary conditions were applied in both the and directions . the arakawa representation of the advective terms , which are often written as a jacobian , was used . the solution was marched forward in time using a second - order adams bashforth scheme with a dimensionless timestep . to control the spurious amplification of high wavenumber modes , we applied a weak exponential cutoff filter and included a small amount of hyperviscosity . two types of initial perturbation to the background pv - staircase were used in our simulations . the first type of perturbation consisted of a superposition of periodic displacements of pv contours with random phases uniformly distributed on . the second type of perturbation was a doubly periodic perturbation to the streamfunction consisting of a sum of products of fourier modes with random phases . in all of our simulations the separation between adjacent eastward jets was taken to be 8000 km and km s was used . the simulations shown correspond to .
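the time - stepping ingredients just listed ( spectral derivatives , second - order adams bashforth , an exponential cutoff filter and hyperviscosity ) can be sketched in one dimension ; this is a linear advection stand - in , not the qg solver itself , and all parameter values are illustrative :

```python
import numpy as np

def step_ab2(u, prev_rhs, dt, k, c=1.0, nu4=1e-8):
    """One step of u_t + c u_x = -nu4 u_xxxx on a periodic domain:
    spectral derivative, AB2 in time (forward-Euler start-up), then an
    exponential cutoff filter to suppress the highest wavenumbers."""
    uh = np.fft.fft(u)
    rhs = -c * (1j * k) * uh - nu4 * k**4 * uh     # advection + hyperviscosity
    if prev_rhs is None:
        uh = uh + dt * rhs                           # first step: Euler
    else:
        uh = uh + dt * (1.5 * rhs - 0.5 * prev_rhs)  # Adams-Bashforth 2
    filt = np.exp(-36.0 * (np.abs(k) / np.abs(k).max()) ** 36)
    return np.real(np.fft.ifft(filt * uh)), rhs

n, length = 128, 2.0 * np.pi
x = np.arange(n) * length / n
k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)  # angular wavenumbers
u, prev = np.exp(-((x - np.pi) ** 2) / 0.1), None
for _ in range(200):                               # advect a smooth blob to t = 0.2
    u, prev = step_ab2(u, prev, 1e-3, k)
```

the 2d solver replaces the linear advection term with the arakawa jacobian evaluated on the grid , but the time march , filter and hyperviscosity enter in exactly the same way .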
in anticipation of our discussion of jupiter in the following section , these parameters were chosen to approximately reproduce conditions on jupiter . note , however , that in our simulations the period in is 32000 km , which is approximately one - tenth the midlatitude distance around jupiter on a line of constant latitude . many one - year model simulations were run . for some parameter values ten - year model simulations were performed . for the parameter values given , the change in both energy and enstrophy throughout the duration of the simulations performed was less than , giving us confidence in the accuracy of the simulations . both types of initial perturbation gave similar results . figure [ uq ] shows plots of instantaneous , at year , zonally - averaged zonal velocity , zonally - averaged potential vorticity and potential vorticity . comparison of figs . [ u0q0 ] and [ uq ] provides strong support for the dynamical consistency of the decomposition ( [ psi])([u_0 - 1 ] ) , but these plots provide no insight into whether transport barriers are present . to address the latter question we have used the year - long records of computed velocity fields to : 1 ) follow the evolution of distributions of passive tracers , which evolve according to ( [ lagrange ] ) ; and 2 ) compute finite - time lyapunov exponents ( ftles ; see , for example , * ? ? ? * ) . typical results are shown in fig . [ ftle ] . as shown in the figure , the initial positions of the passive tracers fell on the lines , which lie midway between the unperturbed eastward and westward jets . it is seen that after a one - year integration the regions between jets are well - mixed , but that there is no meridional transport across the cores of the zonal jets . it should be emphasized that in this and other simulations tracer particles spread meridionally to fill the domains shown in about two weeks . throughout the remainder of the one - year integration no additional meridional tracer spreading occurs .
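the ftle diagnostic can be sketched for a steady zonal flow , the simplest setting in which low - ftle bands appear at the jet cores ; the cosine profile , its amplitude and the 30 - day horizon are illustrative choices , not the values used in our simulations :

```python
import numpy as np

def ftle_line(y0_grid, T, u, delta=1.0, n_steps=400):
    """FTLE along a meridional line for a steady zonal flow (u(y), 0).

    Four auxiliary particles per point give a centred-difference estimate
    of the flow-map gradient J; the FTLE is log(lambda_max)/(2T), where
    lambda_max is the largest eigenvalue of the Cauchy-Green tensor J^T J.
    """
    dt = T / n_steps
    out = []
    for y0 in y0_grid:
        pts = np.array([[-delta, y0], [delta, y0],
                        [0.0, y0 - delta], [0.0, y0 + delta]])
        for _ in range(n_steps):
            pts[:, 0] += dt * u(pts[:, 1])   # v = 0, so y never changes
        J = np.empty((2, 2))
        J[:, 0] = (pts[1] - pts[0]) / (2.0 * delta)
        J[:, 1] = (pts[3] - pts[2]) / (2.0 * delta)
        out.append(np.log(np.linalg.eigvalsh(J.T @ J).max()) / (2.0 * T))
    return np.array(out)

Lj = 8000e3                                  # jet spacing, metres
u = lambda y: 50.0 * np.cos(2.0 * np.pi * y / Lj)
T = 30.0 * 86400.0                           # 30-day integration
f = ftle_line(np.array([0.0, Lj / 4.0, Lj / 2.0]), T, u)
# f is smallest at the jet cores (y = 0 and y = Lj/2), largest on the flank
```

for this steady flow the stretching is purely shear - induced , so the ftle is controlled by the meridional shear of the profile and vanishes at the extrema , i.e. at both the eastward and the westward jet cores .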
with this in mind , fig . [ ftle ] clearly shows that both eastward and westward zonal jets act as meridional transport barriers , consistent with the strong kam stability argument . calculation of ftles provides an additional test of the correctness of the strong kam stability argument . lyapunov exponents are a measure of the rate of divergence of neighboring trajectories . according to the strong kam stability argument , the transport barriers at the cores of zonal jets coincide with generally thin bands of kam invariant tori on which lyapunov exponents are zero . finite - time estimates of lyapunov exponents will not be identically zero on kam invariant tori , but these structures should be readily identifiable as thin bands of low ftle estimates . this is precisely what is seen in fig . [ ftle ] ; both westward and eastward jets are identifiable as thin bands of low ftle estimates , consistent with the strong kam stability argument . typical computed values of ftle , in units of , shown in fig . [ ftle ] are 2 at the cores of the westward jets , 7 at the cores of the eastward jets , and 17 in the well - mixed regions . we have performed many numerical experiments based on pv - staircase flows of the type described here . these simulations support the conclusion that both eastward zonal jets ( where is very large ) and westward zonal jets ( where is very small ) act as robust meridional transport barriers . this conclusion is not sensitive to the choice of parameter values or details of the initial perturbation . two general trends are noteworthy . first , for a small perturbation the width of the barrier region near westward jets is greater than the width of the barrier region near eastward jets . this behavior is consistent with the strong kam stability argument : is small over a larger -domain near westward jets than near eastward jets . second , as the perturbation strength increases , transport barriers near westward jets generally break before eastward jet barriers break .
in our simulations the westward jet barriers broke when the initial rms meridional pv - contour displacement exceeded approximately , while the eastward jet barriers broke when the initial rms displacement was approximately twice this value . possible explanations for the somewhat more robust nature of the eastward jets are : 1 ) the pv - barrier mechanism and the strong kam stability barrier mechanism act in tandem to strengthen the barriers near eastward jets ; and 2 ) we have performed a linear theory rossby wave analysis of pv - staircase flows that reveals that rossby wave critical layers are precluded at the eastward jets ( see also * ? ? ? * ) , which suggests that the eastward jet barriers may be more robust . in the previous section it was demonstrated that transport barriers may exist in a pv - conserving flow at locations that do not coincide with pv - barriers . in this section we discuss observational evidence that suggests the existence of transport barriers in the absence of pv - barriers . we consider two examples : 1 ) jupiter s weather layer ; and 2 ) the earth s stratosphere . in both cases , conclusions drawn should be regarded as tentative inasmuch as we do not treat either system in enough depth to make a definitive statement .
in spite of our incomplete treatment of these topics , we feel that it is important to point out that , consistent with the theoretical and numerical results presented in the previous section , there is observational evidence in planetary atmospheres that suggests the existence of transport barriers in the absence of pv - barriers that can be explained by the strong kam stability mechanism . both of the examples considered are , in our view , sufficiently important that the connection between the observations discussed and the strong kam stability barrier mechanism is worthy of a much more thorough investigation . it is natural to focus on flows consisting of a sequence of alternating eastward and westward zonal jets because many of the arguments used in the previous section are then applicable . in particular , for this class of flows , meridional transport barriers of the strong kam stability type are predicted to occur at the latitudes where . furthermore , in alternating zonal jet flows one may anticipate that eastward and westward jets are associated with large and small background pv - gradients , respectively . note , however , that for the purpose of identifying transport barriers in the absence of pv - barriers it is not necessary that the background pv distribution in the flows considered be that of an idealized pv - staircase . this point is discussed in more detail below . the following simple scaling argument shows why an alternating zonal jet flow is well - developed in jupiter s weather layer but is only marginally identifiable in the earth s stratosphere . these systems are then discussed in turn . recall that in a pv - staircase flow the separation between adjacent eastward jets is . we shall assume that this estimate approximately holds for general midlatitude multiple zonal jet mean flow patterns . let denote the planetary radius and the planetary rotation rate . then and the number of eastward ( or westward ) jets one expects to observe in each hemisphere at
midlatitudes ( whose extent in latitude is taken here to be half the equator to pole distance ) is approximately . in jupiter s weather layer ( m s , km , h ) , in good agreement with fig . [ beltzone ] , as described below . in the earth s stratosphere ( m s , km , h ) . thus , conditions for the formation of a multiple zonal jet mean flow pattern are only marginally satisfied in the earth s stratosphere . in the earth s troposphere is smaller , suggesting more favorable conditions , but mountain ranges and thermal exchange processes between the atmosphere and irregularly shaped oceans and continents constitute significant hindrances to the formation of zonal flows . conditions are favorable in the earth s oceans ( m s , corresponding to ) but there the presence of lateral boundaries dictates that zonal jets be embedded in recirculation gyres . the most striking feature of jupiter s weather layer circulation is that it is organized in a sequence of alternating eastward and westward zonal jets whose meridional excursion is very small . figure [ beltzone ] shows the zonally - averaged zonal wind speed on jupiter at the cloud top level as a function of latitude . regions with dark and light shading in this figure are referred to as belts and zones , respectively . belts and zones correspond to regions in which the background motion is cyclonic and anticyclonic , respectively . the boundaries between adjacent belts and zones coincide with the cores of the zonal jets . at these boundaries . belts and zones have different radiative transfer properties ( analyses are not limited to the visible band of the electromagnetic spectrum ) , which is attributed to differences in chemical composition .
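the jet - count scaling argument above can be sketched numerically ; the velocity scales , radii , rotation periods and reference latitude below are round illustrative numbers , and the jet spacing is taken as a staircase estimate proportional to the rhines scale :

```python
import numpy as np

def n_jets(U, R, period_hours, lat_deg=45.0):
    """Rough jet count per hemisphere: midlatitude extent (pi R / 4)
    divided by a staircase jet spacing pi * sqrt(2 U / beta).

    U, R, the rotation period and the latitude used for beta are
    illustrative inputs, not values quoted from the text.
    """
    omega = 2.0 * np.pi / (period_hours * 3600.0)
    beta = 2.0 * omega * np.cos(np.radians(lat_deg)) / R  # df/dy at lat_deg
    spacing = np.pi * np.sqrt(2.0 * U / beta)
    return (np.pi * R / 4.0) / spacing

print(n_jets(U=10.0, R=7.1e7, period_hours=9.9))   # jupiter: several jets
print(n_jets(U=60.0, R=6.4e6, period_hours=24.0))  # earth: about one
```

with these inputs the estimate gives several jets per hemisphere for jupiter but only of order one for the earth s stratosphere , matching the qualitative contrast drawn in the text .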
Assuming that the weather layer flow is approximately two-dimensional and that chemical species are long-lived, one may infer from the observation that adjacent belts and zones have different chemical constituents that there is very little fluid exchange between adjacent belts and zones, implying that both eastward and westward zonal jets act as robust meridional transport barriers. Figure [uqq_y] shows , , and in Jupiter's weather layer. The same data are shown (but are plotted differently) in . The question of whether a PV-staircase is a useful approximate model of Jupiter's weather layer has been considered by many authors. For our purposes the answer to this question is not critical. Our focus is on identifying transport barriers that cannot be explained by the PV-barrier mechanism. Figure [uqq_y] shows that while all eastward jets are associated with large meridional PV-gradients, most of the westward jets are associated with small meridional PV-gradients. (Here and above we are using the term westward jet somewhat loosely to include minima of , even when at the minimum.) In other words, most of the transport barriers near the cores of the westward jets cannot be explained by the PV-barrier mechanism. But all of the belt-zone boundaries (the apparent meridional transport barriers) coincide with latitudes at which , consistent with the strong KAM stability barrier mechanism. Thus all of the apparent transport barriers can be explained by the strong KAM stability barrier mechanism. Note, however, that there is no apparent barrier on the equator, where . This is probably due to a combination of anomalous equatorial dynamics and the manner by which chemical constituents are pumped into the near-equatorial weather layer.
Some caveats relating to our interpretation of observations from Jupiter should be emphasized, however. First, we have assumed that the weather layer flow is nearly two-dimensional and horizontally nondivergent, being only weakly forced by convection. Although these assumptions are generally accepted (see, for example, ), it should be noted that our explanation of the apparent transport barriers rests on their validity. A second assumption that we have made is that chemical species in Jupiter's weather layer are long-lived. Another possible explanation of the apparent transport barriers between adjacent belts and zones is that chemical species are short-lived, being continuously pumped into the weather layer by convective overturning. We cannot rule out this possibility. Our argument shows, however, that the apparent lack of fluid exchange between adjacent belts and zones _can_ be explained using dynamical arguments. The simple scaling argument given above predicts that conditions for the formation of a stable alternating multiple-jet zonal mean flow pattern are only marginally satisfied in the Earth's stratosphere. In qualitative agreement with this prediction, in each hemisphere there is one readily identifiable eastward zonal jet and one westward jet, and the appearance of these jets is seasonal (see, for example, ). The stronger jets are the high-latitude eastward polar night jets, which appear in the winter hemisphere. (The austral polar night jet is particularly strong, persisting throughout most of the stratosphere during the austral fall, winter and spring.) The westward jets are present in the subtropics throughout most of the stratosphere during the summer months in each hemisphere. The eastward polar night jets, especially in the southern hemisphere, act as transport barriers and are associated with strong PV-gradients.
Because the focus in the present study is on identifying transport barriers in the absence of PV-barriers, these jets are not of interest here. In contrast, the westward subtropical jets in the summer hemisphere are very much of interest because these are not associated with a strong PV-gradient. The properties just described are illustrated in fig. [era40]. That figure shows a 7-year (1992–1998) monthly average of zonally averaged potential vorticity (in color) and zonal wind (as contours) on the 460 K isentropic surface (which lies in the lower stratosphere), based on the European Centre for Medium-Range Weather Forecasts (ECMWF) 40-year re-analysis (ERA-40) product. Both the eastward polar night jets and the westward subtropical jets are readily identifiable. As we have noted, the eastward winter hemisphere polar night jets are associated with strong meridional PV-gradients while the westward subtropical jets are associated with nearly homogeneous PV-distributions. The westward subtropical jets have the dynamical properties that we seek: zonal jets in the absence of strong PV-gradients. We note also that, consistent with arguments made originally by , Rossby wave perturbations to the background in these regions are weak.
From the standpoint of applicability of the strong KAM stability argument, weak perturbations are advantageous. We turn our attention now to the question of whether the cores of these jets serve as robust meridional transport barriers. Studies of stratospheric transport based on effective diffusivity have been carried out by and . The effective diffusivity is large (small) in regions where fluid is well (poorly) mixed. Fluid in the vicinity of a transport barrier is poorly mixed; these regions are thus characterized by a small effective diffusivity. Both stratospheric effective diffusivity analyses reveal that the westward subtropical jet in the summer hemisphere coincides with a region of anomalously low effective diffusivity; see plates 1 and 4 in and 1 through 4 in . This suggests that these westward jets act as meridional transport barriers. Previous work by and had focused on this "subtropical barrier." Indeed, this barrier comprises a critical element of the "tropical pipe" model of stratospheric transport. Observational evidence that suggests the presence of subtropical transport barriers is presented in , , , and . provides a recent review of stratospheric transport, including a discussion of subtropical transport barriers. The evidence that we have pointed out strongly suggests that: 1) the stratospheric subtropical barrier is a robust meridional transport barrier that coincides with the core of a westward jet; 2) the associated meridional PV-gradient is very small, so this barrier cannot be explained by the PV-barrier mechanism; and 3) because at this barrier, the barrier is predicted by the strong KAM stability barrier mechanism.
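The role of weak perturbations and KAM-type trapping can be illustrated with a minimal kinematic model in the spirit of the jet models of Rypina et al. (2007) cited below: a Bickley jet perturbed by two weak traveling waves. All parameter values here are assumed for illustration, and this is not the flow used in our simulations. Because the wave phase speeds differ from the jet-core speed, particles seeded on the jet axis remain trapped on surviving invariant tori and never wander far from the core, even though the velocity field is aperiodically time-dependent.

```python
import numpy as np

# Nondimensional Bickley jet, psi0 = -tanh(y), plus two traveling waves
# confined to the jet: psi1 = sech(y)**2 * sum_i eps_i * cos(k_i*x - k_i*c_i*t).
EPS = (0.03, 0.03)    # wave amplitudes (assumed, weak perturbation)
K = (2.0, 3.0)        # wavenumbers (assumed)
C = (0.5, 0.3)        # phase speeds, both below the jet-core speed u(0) = 1

def velocity(x, y, t):
    sech2 = 1.0 / np.cosh(y) ** 2
    u = sech2.copy()                  # -d(psi0)/dy, the unperturbed jet
    v = np.zeros_like(x)
    for eps, k, c in zip(EPS, K, C):
        phase = k * (x - c * t)
        u += 2.0 * eps * sech2 * np.tanh(y) * np.cos(phase)  # -d(psi1)/dy
        v += -eps * k * sech2 * np.sin(phase)                # +d(psi1)/dx
    return u, v

def advect(x, y, t_end=100.0, dt=0.05):
    """RK4 particle advection; returns the maximum |y| reached by any particle."""
    ymax, t = np.abs(y).max(), 0.0
    while t < t_end:
        k1 = velocity(x, y, t)
        k2 = velocity(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1], t + 0.5 * dt)
        k3 = velocity(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1], t + 0.5 * dt)
        k4 = velocity(x + dt * k3[0], y + dt * k3[1], t + dt)
        x = x + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
        y = y + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
        t += dt
        ymax = max(ymax, np.abs(y).max())
    return ymax

x0 = np.linspace(0.0, 2.0 * np.pi, 20)
y0 = np.zeros_like(x0)               # seed particles on the jet axis
print("max |y| over the run:", advect(x0, y0))
```

For these weak amplitudes the seeded particles stay close to the jet axis; much larger amplitudes would eventually destroy the core tori and permit meridional transport.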
To test these tentative conclusions more rigorously, a study based on realistic synoptic winds that tracks both potential vorticity and tracer distributions should be conducted. In the first part of this paper we presented numerical simulations of passive tracer transport in a perturbed PV-staircase flow and showed that both eastward and westward jets in this flow act as meridional transport barriers. The surprise is that westward jets, where the background PV gradient vanishes, act as transport barriers. This behavior was explained as being a consequence of the strong KAM stability barrier mechanism. We then briefly discussed the applicability of the strong KAM stability mechanism to explaining observations of Jupiter's weather layer and the Earth's subtropical stratosphere. In both of these systems westward jets are present that appear to act as robust meridional transport barriers in the absence of a background meridional PV-barrier. These barriers are predicted by the strong KAM stability mechanism. In both cases the evidence presented should be regarded as suggestive. More thorough investigations of both problems are recommended. The principal weakness of our explanation of the apparent lack of fluid exchange between adjacent belts and zones in Jupiter's midlatitude weather layer is that we cannot exclude the possibility that the observed chemical composition differences in adjacent belts and zones are caused by a combination of strong convective overturning and short-lived chemical species. In spite of this caveat, it is important to emphasize that we have shown that maintenance of the apparent chemical composition differences between adjacent belts and zones _can_ be explained using a dynamical argument (as opposed to a chemistry-based argument) in which the weather layer flow is only weakly convectively forced.
The principal weaknesses in our discussion of the Earth's subtropical stratospheric transport barrier were that all of the properties noted were based on averaged winds rather than synoptic winds, and that tracer transport and potential vorticity distributions were not estimated in a way that was guaranteed to be self-consistent. It should not be difficult to overcome these shortcomings using model-based synoptic winds. We thank T. Dowling and R. Morales-Juberias for providing the data used to construct figs. [beltzone] and [uqq_y]. Comments on the manuscript by anonymous reviewers, T. Shepherd, T. Özgökmen, and J. Willemsen are sincerely appreciated. The ECMWF ERA-40 data used in this study were obtained from the ECMWF data server. Support for this work was provided by the U.S. National Science Foundation under grants CMG0417425 and OCE0648284. , P. L., Y. H. Yamazaki, S. R. Lewis, P. D. Williams, R. Wordsworth, K. Miki-Yamazaki, J. Sommeria, and H. Didelle, 2007: Dynamics of convectively driven banded jets in the laboratory. , 64, 4031–4052. Rypina, I. I., M. G. Brown, F. J. Beron-Vera, H. Koçak, M. J. Olascoaga, and I. A. Udovydchenkov, 2007: On the Lagrangian dynamics of atmospheric zonal jets and the permeability of the stratospheric polar vortex. , 64, | The connection between transport barriers and potential vorticity (PV) barriers in PV-conserving flows is investigated, with a focus on zonal jets in planetary atmospheres. A perturbed PV-staircase model is used to illustrate important concepts. This flow consists of a sequence of narrow eastward and broad westward zonal jets with a staircase PV structure; the PV-steps are at the latitudes of the cores of the eastward jets. Numerically simulated solutions to the quasigeostrophic PV conservation equation in a perturbed PV-staircase flow are presented.
These simulations reveal that both eastward and westward zonal jets serve as robust meridional transport barriers. The surprise is that westward jets, across which the background PV gradient vanishes, serve as robust transport barriers. A theoretical explanation of the underlying barrier mechanism is provided. It is argued that transport barriers near the cores of westward zonal jets, across which the background PV gradient is small, are found in Jupiter's midlatitude weather layer and in the Earth's summer hemisphere subtropical stratosphere. |
In spite of progress in the understanding of evolution ever since Darwin (1859), speciation is not yet fully understood. In their recent book, Maynard Smith and Szathmáry (1995) wrote that _we are not aware of any explicit model demonstrating the instability of a sexual continuum_. To discuss the problem of speciation, let us start by reviewing basic standpoints in evolution theory, although it might look too elementary here. *(i) Existence of genotype and phenotype.* (ii) Fitness for reproduction is given as a function of the phenotype and the environment. The "environment" can include interaction with other individuals. In other words, the reproduction rate of an individual is a function of its phenotype and environment, i.e., (phenotype, environment). *(iii) Only the genotype is transferred to the next generation (Weismann doctrine).* (iv) There is flow only from genotype to phenotype (the central dogma of molecular biology). For example, through the developmental process the phenotype is determined depending on the genotype. Now, the process is summarized as _genotype → development → phenotype_. Here we adopt these standard assumptions. (Although assumption (iii) may not be valid for some cases known as epigenetic inheritance, we accept the assumption here, since the relevance of epigenetic inheritance to evolution is still controversial, and the theory to be proposed is valid in the presence of epigenetic inheritance, but does not require it.) In standard evolutionary genetics, assumption (iv) is further replaced by a stronger one, i.e., (iv') "phenotype is a _single-valued_ function of genotype". If this were always true, we could replace (phenotype, environment) in (ii) by (f(genotype), environment), and then we could discuss the evolutionary process in terms of the population dynamics only of genotypes (and environment). This is the basic standpoint in population genetics.
Indeed, this reduction to genes is valid for gradual evolution. It is also supported by the following mathematical argument. The change of genotype is slower in time scale than that of phenotype. As is known, variables with a slower time scale act as control parameters for faster ones, if the time scale separation is large enough (and if the dynamics on the fast time scale do not have an instability that leads to bifurcation). Still, explanation of speciation, especially sympatric speciation, is not so easy following this standard evolutionary genetics. If slight genetic change leads to slight phenotype change, then individuals arising from mutation from the same genetic group differ only slightly according to this picture. Then, these individuals compete with each other for the same niche. Unless the phenotype in question is neutral, it is generally difficult for two (or more) groups to coexist; those with a higher fitness would survive. One possible way out of this difficulty is to assume that the two groups are 'effectively' isolated, so that they do not compete. Some candidates for such isolation have been sought. The most well-known example is spatial segregation, known as allopatric speciation. Since we are here interested in sympatric speciation, this solution cannot be adopted here. Furthermore, there is direct evidence that sympatric speciation really occurred in evolution, for example in the speciation of cichlids in some lakes (Schliewen et al. 1994).
As another candidate for separation, mating preference has been discussed (Maynard Smith 1966, Felsenstein 1981, Grant 1981, Doebeli 1996, Howard and Berlocher eds. 1998). Recently, there have appeared some models showing the instability of the sexual continuum, without assuming the existence of discrete groups in the beginning. Probably, the argument based on runaway is most persuasive (Lande 1981, Turner and Burrows 1995, Howard and Berlocher eds. 1998). Even though two groups coexist at the same spatial location, they can be genetically separated if the two groups do not mate with each other. Hence, mating preference has been proposed as a mechanism for sympatric speciation. However, in this theory, why there is such mating preference in the first place is not answered; accordingly, it is not self-contained as a theory. Another recent proposal is the introduction of an (almost) neutral fitness landscape and exclusion of individuals with similar phenotypes (Dieckmann and Doebeli 1999, Kondrashov and Kondrashov 1999, Kawata and Yoshimura 2001). For example, Dieckmann and Doebeli [1999] have succeeded in showing that two groups are formed and coexist, to avoid the competition among organisms with similar phenotypes, assuming a rather flat fitness landscape. This provides one explanation and can be relevant to some sympatric speciation. However, it is not so clear how a phenotype that matters little for fitness can work strongly as a factor for exclusion of closer values. Furthermore, we are more interested in the differentiation of phenotypes that are functionally different and not neutral. So far, in these studies, the interaction between individuals leads to competition for survival. The difficulty in stable sympatric speciation without mating preference lies in the lack of a known clear mechanism for how two groups, which have just started to separate, coexist in the presence of mutual interaction.
Of course, if the two groups were in a symbiotic state, the coexistence could help the survival of each. However, the two groups have little difference in genotype at the beginning of the speciation process, according to assumption (iv'). Then, it would be quite difficult to imagine such a 'symbiotic' mechanism. Now, the problem we address here is as follows: if we do not assume (iv') (but still assume (i)-(iv)), is there any mechanism by which two groups mutually require each other for survival at the beginning of their separation? In the present paper we propose such a mechanism, and provide a sympatric speciation scenario robust against fluctuations. Note that the above difficulty comes from the assumption that the phenotype is a single-valued function of genotype. Is this single-valuedness always true? To address this question, we reconsider the genotype-phenotype relationship. Indeed, there are three reasons to doubt this single-valuedness. First, Yomo and his colleagues have reported that specific mutants of _E. coli_ show (at least) two distinct types of enzyme activity, although they have identical genes (Ko et al. 1994). These different types coexist in an unstructured environment of a chemostat (Ko et al. 1994), and this coexistence is not due to spatial localization. Coexistence of each type is supported by the other; indeed, when one type of _E. coli_ is removed externally, the remaining type starts differentiation again to recover the coexistence of the two types. The experiment demonstrates that the enzyme activity of these _E. coli_ is differentiated into two (or more) groups, due to the interaction with each other, even though they have identical genes.
Here the spatial factor is not important, since this experiment was carried out in a well-stirred chemostat. Second, some organisms are known to show various phenotypes from a single genotype. This phenomenon is often related to malfunctions of a mutant (Holmes 1979), and is called low or incomplete penetrance (Opitz 1981). Third, a theoretical mechanism of phenotypic diversification has already been proposed, the isologous diversification for cell differentiation (Kaneko and Yomo 1994, 1997, 1999; Furusawa and Kaneko 1998). The theory states that phenotypic diversity will arise from a single genotype and develop dynamically through intracellular complexity and intercellular connection. When organisms with plastic developmental dynamics interact with each other, the dynamics of each unit can be stabilized by forming distinct groups with differentiated states in the pheno-space. Here the two differentiated groups are necessary to stabilize each other's dynamics; otherwise, the developmental process is unstable, and through the interaction the two types are formed again, when there is a sufficient number of units. This theoretical mechanism is demonstrated by several models and is shown to be a general consequence of coupled dynamical systems. The isologous diversification theory shows that there can be developmental 'flexibility', in which different phenotypes arise from identical gene sets, as in the incomplete penetrance mentioned above. Now we have to study how this theory is relevant to evolution. Indeed, the question of how developmental process and evolution are related has been addressed over decades (Maynard Smith et al.). We take the correspondence between genotype and phenotype seriously, by introducing a developmental process with which a given initial condition is led to some phenotype according to a given genotype. 'Development' here means a dynamic process from an initial state to a matured state through rules associated with genes. (
In this sense, it is not necessarily restricted to multicellular organisms.) To consider evolution with developmental dynamics, it is appropriate to represent the phenotype by a set of state variables. For example, each individual has variables , , , which define the phenotype. This set of variables can be regarded as concentrations of chemicals, rates of metabolic processes, or some quantity corresponding to a higher function characterizing the behavior of the organism. The state is not fixed in time, but develops from the initial state at birth to a matured state when the organism is ready to produce its offspring. The dynamics of the state variables , , are given by a set of equations with some parameters. Genes, since they are nothing but information expressed on DNA, could in principle be included in the set of variables. However, according to the central dogma of molecular biology (requisite (iv)), the gene has a special role among such variables. Genes can affect phenotypes, the set of variables, but the phenotypes cannot change the code of genes. During the life cycle, changes in genes are negligible compared with those of the phenotypic variables they control. In terms of dynamical systems, the set corresponding to genes can be represented by parameters that govern the dynamics of phenotypes, since the parameters in an equation are not changed through the developmental process, while the parameters control the dynamics of phenotypic variables.
Only when an individual organism is reproduced does this set of parameters change slightly by mutation. For example, when represents the concentrations of metabolic chemicals, is the catalytic activity of the enzymes that control the corresponding chemical reactions. Now, our model is set up as follows: (1) Dynamical change of states giving a phenotype: the temporal evolution of the state variables , , is given by a set of deterministic equations, which depend on the state of the individual, the parameters (genotype), and the interaction with other individuals. This temporal evolution of the state consists of internal dynamics and interaction. (1-1) The internal dynamics (say, a metabolic process in an organism) are represented by equations governed only by , , (without dependence on ), and are controlled by the parameter sets. (1-2) Interaction between the individuals: the interaction is given through the set of variables. For example, we consider an interaction form in which the individuals interact with all others through competition for some 'resources'. The resources are taken by all the individuals, giving competition among all the individuals. Since we are interested in sympatric speciation, we take this extreme all-to-all interaction, by taking a well-stirred soup of resources, without including any spatially localized interaction. (2) Reproduction and death: each individual gives offspring (or splits into two) when a given 'maturity condition' for growth is satisfied. This condition is given by a set of variables. For example, if represents a cyclic process corresponding to a metabolic, genetic or other process required for reproduction, we assume that the unit replicates when the accumulated number of cyclic processes goes beyond some threshold.
(3) Mutation: when each organism reproduces, the set of parameters changes slightly by mutation, by adding a random number with a small amplitude, corresponding to the mutation rate. The values of variables are not transferred but are reset to initial conditions. (If one wants to include some factor of epigenetic inheritance, one could assume that some of the values of state variables are transferred. Indeed we have carried out this simulation also, but the results to be discussed are not altered, or are confirmed even more strongly.) (4) Competition: to introduce competition for survival, death is included both by random removal of organisms at some rate and by a given death condition based on their state. For a specific example, see the appendix. From several simulations satisfying the conditions of the model in section 2, we have obtained a scenario for a sympatric speciation process (Kaneko and Yomo 2000, 2002). The speciation process we observed is schematically shown in fig. 1, where the change of the correspondence between a phenotypic variable ("P") and a genotypic parameter ("G") is plotted at every reproduction event. This scenario is summarized as follows. In the beginning, there is a single species, with one-to-one correspondence between phenotype and genotype. Here, there is only slight genetic and phenotypic diversity, continuously distributed (see fig. 1a). We assume that the isologous diversification starts due to developmental plasticity with interaction, when the number of these organisms increases. Indeed, the existence of such phenotypic differentiation is supported by the isologous diversification mechanism and by several numerical experiments. This gives the following stage I.
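The interaction-induced differentiation that initiates the first stage (described next) can be illustrated with a toy caricature of ingredients (1)-(4): N bistable phenotypic units with identical genotypes, coupled only through a mean-field competition term. This is our illustrative sketch, not the model of the paper (that is given in the appendix); the cubic nonlinearity, the coupling form and all parameter values are assumed. For this choice the homogeneous population is linearly unstable once the coupling exceeds c = 2/3, and the units then split spontaneously into 'upper' and 'lower' phenotype groups.

```python
import numpy as np

def develop(n=50, c=0.8, steps=20000, dt=0.01, seed=1):
    """Euler-integrate dx_i/dt = x_i - x_i**3 - c*mean(x): bistable units
    with identical 'genotypes', coupled only through the population mean
    (a crude stand-in for all-to-all competition for a shared resource)."""
    rng = np.random.default_rng(seed)
    x = 0.5 + 1e-3 * rng.standard_normal(n)   # nearly identical initial states
    for _ in range(steps):
        x = x + dt * (x - x ** 3 - c * x.mean())
    return x

strong = develop(c=0.8)   # strong competition: units split into two groups
weak = develop(c=0.3)     # weak competition: population stays homogeneous
print("strong coupling:", (strong > 0).sum(), "up,", (strong < 0).sum(), "down")
print("weak coupling spread:", weak.max() - weak.min())
```

The differentiation here is purely dynamical: all units share the same parameters, and which unit joins which group is decided by tiny initial fluctuations, mirroring the claim that phenotypic splitting precedes any genetic difference.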
*Stage I: interaction-induced phenotypic differentiation.* When there are many individuals interacting for finite resources, the phenotypic dynamics start to be differentiated even though the genotypes are identical or differ only slightly. Phenotypic variables split into two (or more) types (see fig. 1b). This interaction-induced differentiation is an outcome of the mechanism mentioned above. A slight phenotypic difference between individuals is amplified by the internal dynamics, while through the interaction between organisms, the phenotypic dynamics tend to be clustered into two (or more) types. Here the two distinct phenotype groups (brought about by interaction) are tentatively called 'upper' and 'lower' groups. This differentiation is brought about because a population consisting of individuals taking identical phenotypes is destabilized by the interaction. Such instability is, for example, caused by an increase of population or a decrease of resources, leading to strong competition. Of course, if the phenotype at a matured state were rigidly determined by developmental dynamics, such differentiation would not occur. The only assumption we make in the present theory is that there exists such developmental plasticity in the internal dynamics when the interaction is strong. Recall again that this assumption is theoretically supported. Note that at this stage the difference is fixed neither at the genetic nor at the phenotypic level. After reproduction, an individual's phenotype can switch to the other type. *Stage II: amplification of the difference through the genotype-phenotype relationship.* At the second stage the difference between the two groups is amplified both at the genotypic and at the phenotypic level. This is realized by a kind of positive feedback process between the changes of geno- and phenotypes. First the genetic parameter(s) separate as a result of the phenotypic change. This occurs if the parameter
dependence of the growth rate is different between the two phenotypes. Generally, there are one or several parameter(s) such that the growth rate increases with the parameter for the upper group and decreases for the lower group (or the other way round) (see fig. 1c and fig. 2). Certainly, such a parameter dependence is not exceptional. As a simple illustration, assume that the use of metabolic processes is different between the two phenotypic groups. If the upper group uses one metabolic cycle more, then a mutational change of the parameter (e.g., an enzyme catalytic activity) that enhances the cycle is in favor of the upper group, while a change that reduces it may be in favor of the lower group. Indeed, all the numerical results carried out so far support the existence of such parameters. This dependence of the growth rate on the genotypes leads to the genetic separation of the two groups, as long as there is competition for survival to keep the population numbers limited. The genetic separation is often accompanied by a second process, the amplification of the phenotypic difference by the genotypic difference. In the situation of fig. 1c, as a parameter increases, a phenotype (i.e., a characteristic quantity for the phenotype) tends to increase for the upper group, and to decrease (or to remain the same) for the lower group. It should be noted that this second stage is _always_ observed in our model simulations when the phenotypic differentiation at the first stage occurred.
*Stage III: genetic fixation.* After the separation of the two groups has progressed, each phenotype (and genotype) starts to be preserved by the offspring, in contrast to the situation at the first stage. However, up to the second stage, the two groups with different phenotypes cannot exist in isolation by themselves. When isolated, offspring with the phenotype of the other group start to appear; the two groups coexist depending on each other (see fig. 1d). Only at this third stage does each group start to exist on its own: even if one group of units is isolated, no offspring with the phenotype of the other group appears. Now the two groups exist on their own. Such a fixation of phenotypes is possible through the evolution of genotypes (parameters). In other words, the differentiation is fixed into the genes (parameters). Now each group exists as an independent 'species', separated both genetically and phenotypically. The initial phenotypic change introduced by interaction is fixed into genes, and the 'speciation process' is completed. At the second stage, the separation is not yet fixed rigidly. Units selected from one group at this earlier stage again start to show phenotypic differentiation, followed by genotypic separation, as demonstrated by several simulations. After some generations, one of the differentiated groups recovers the geno- and phenotype that had existed before the transplant experiment. This is in strong contrast with the third stage. At the third stage, two groups with distinct genotypes and phenotypes are formed, each of which has a one-to-one mapping from genotype to phenotype. This stage can now be regarded as speciation (in the next section we will show that this separation satisfies hybrid sterility in sexual reproduction, and is appropriately called speciation).
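The genetic fixation of stages II and III can likewise be caricatured by a disruptive-selection sketch (again ours, not the model of the paper): once the two phenotype groups exist, and the growth rate depends on a genotype parameter g with opposite signs in the two groups, selection plus small mutations drives the genotype distributions of the two groups apart, so the phenotypic split becomes genetically fixed. The linear fitness form, the selection strength, the mutation amplitude, and the device of letting each group reproduce into its own half of the population (mimicking the mutual dependence described above) are all assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n, s, sigma = 50, 0.5, 0.05   # group size, selection strength, mutation step

# stage I has produced two phenotype groups with identical genotypes g ~ 0
g_up = rng.normal(0.0, 0.01, n)
g_dn = rng.normal(0.0, 0.01, n)

for _ in range(300):          # generations
    # growth rate depends on the genotype parameter with opposite signs
    w_up = np.clip(1.0 + s * g_up, 1e-3, None)
    w_dn = np.clip(1.0 - s * g_dn, 1e-3, None)
    # each group refills its own niche; offspring inherit g with small mutations
    g_up = rng.choice(g_up, n, p=w_up / w_up.sum()) + rng.normal(0, sigma, n)
    g_dn = rng.choice(g_dn, n, p=w_dn / w_dn.sum()) + rng.normal(0, sigma, n)

print("mean genotypes:", g_up.mean(), g_dn.mean())
```

With these assumptions the two mean genotypes move steadily in opposite directions, a minimal analogue of the consolidation from fig. 1c to fig. 1d.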
When we look at the present process only by observing the initial population distribution (in fig. 1a) and the final population distribution (in fig. 1d), without information on the intermediate stages given by figs. 1b and 1c, one might think that the genes split into two groups by mutations and, as a result, two phenotype groups are formed, since there is only a flow from genotype to phenotype. As we know the intermediate stages, however, we can conclude that this simple picture does not hold here. Here phenotype differentiation drives the genetic separation, in spite of the flow only from genotype to phenotype. The phenotype differentiation is consolidated into the genotype, and then the offspring take the same phenotype as their ancestor. Note that our speciation process occurs under strong interaction. At the second stage, the two groups form a symbiotic relationship. As a result, the speciation is robust in the following sense: if one group is eliminated externally, or goes extinct accidentally at the first or second stage, the remaining group forms the other phenotype group again, and then the genetic differentiation starts again. The speciation process here is robust against perturbations. Note that each of the two groups mutually forms a niche for the other group's survival, and each of the groups is specialized in this created niche. For example, some chemicals secreted by one group are used as resources for the other, and vice versa. Hence the evolution of the two groups is mutually related. At the first and second stages of the evolution, the speed of reproduction is not so different between the two groups. Indeed, at these stages, the reproduction of each group is strongly dependent on the other group, and the 'fitness' as a reproduction speed of each group by itself alone cannot be defined.
At stage II, the reproduction of each group is balanced through the interaction, so that neither group can dominate the population (see Fig. 2). If phenotypic differentiation at stage I occurs in our model, then the genetic differentiation of the later stages _always_ follows, in spite of the random mutation process included. How long it takes to reach the third stage can depend on the mutation rate, but the speciation process itself does not: however small the mutation rate may be, the speciation (genetic fixation) always occurs. Once the initial parameters of the model are chosen, it is already determined whether the interaction-induced phenotype differentiation will occur or not. If it occurs, the genetic differentiation always follows. On the other hand, in our setting, if the interaction-induced differentiation does not exist initially, no later genetic diversification process occurs. If the parameters characterizing the nonlinear internal dynamics or the coupling parameters characterizing the interaction are small, no phenotypic differentiation occurs. Also, the larger the resource per individual is, the smaller the effective interaction is, and phenotypic differentiation again does not occur. In these cases, even with a large mutation rate, no differentiation into distinct genetic groups appears, although the distribution of genes (parameters) is broader. We have also made several simulations starting from a population of units with widely distributed parameters (i.e., genotypes). However, unless the phenotypic separation into distinct groups is formed, the genetic differentiation does not follow (Fig. 4a and b); only if the phenotype differentiation occurs does the genetic differentiation follow (Fig. 4c). For some other models with many variables and parameters, the phenotypes are often distributed broadly but continuously, without forming distinct groups.
In this case again, distinct genetic groups do not appear through the mutations, although the genotypes are broadly distributed (see Fig. 5). The speciation process is defined both by genetic differentiation and by reproductive isolation (Dobzhansky 1937). Although the evolution through stages I-III leads to genetically isolated reproductive units, one might still say that it should not be called 'speciation' unless the process yields isolated reproductive groups under sexual recombination. In fact, it is not trivial whether the present process works with sexual recombination, since the genotypes from the parents are mixed at each recombination. To check this, we have considered models in which sexual recombination mixes the genes. To be specific, reproduction occurs when two individuals both satisfy the maturity condition, and the two genotypes are then mixed. As an example, we have produced two offspring from the two parent individuals by mixing the parents' genotype parameters with a random weight (see also Appendix 11.2). In spite of this strong mixing of genotype parameters, the two distinct groups are again formed. Of course, mating between the two groups can produce an individual with parameters in the middle of the two groups. When the parameters of an individual take intermediate values between those of the two groups, whatever phenotype it takes, its reproduction takes a much longer time than that of the two groups.
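The genotype mixing just described can be sketched as follows. Since the paper's explicit mixing equation is not reproduced above, the convex combination with a fresh random weight per mating, and the small uniform mutation added afterwards, are assumptions consistent with the statement that offspring parameters are intermediate between those of the parents.

```python
import random

def recombine(parent_a, parent_b, mutation_rate=0.01):
    """Produce two offspring genotypes (parameter lists) from two parents.

    Assumed form: each offspring locus is a convex combination of the
    parents' values with a random weight r, so offspring parameters lie
    between the parents'; a small additive mutation follows.
    """
    r = random.random()  # fresh mixing weight for this mating
    child1 = [r * a + (1 - r) * b for a, b in zip(parent_a, parent_b)]
    child2 = [(1 - r) * a + r * b for a, b in zip(parent_a, parent_b)]

    def mutate(genotype):
        return [g + random.uniform(-mutation_rate, mutation_rate)
                for g in genotype]

    return mutate(child1), mutate(child2)
```

With parents drawn from the two separated groups, both offspring land between the groups, which is exactly the intermediate-parameter hybrid whose slow reproduction is discussed next.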
Before the reproduction condition is satisfied, such an individual has a higher probability of being removed by death. As the separation into the two groups progresses further, an individual with intermediate parameter values never reaches the condition for reproduction before it dies. This post-mating isolation process is demonstrated clearly by measuring the average offspring number of individuals over given parameter (genotype) ranges and over some time span. An example of this average offspring number is plotted in Fig. 3, as the speciation process progresses. As the two groups with distinct parameter values are formed, the average offspring number of an individual having parameters between those of the two groups starts to decrease. Soon the number goes to zero, implying that the hybrid between the two groups is sterile. In this sense, sterility (or low reproduction) of the hybrid appears as a *result, without any assumption on mating preference*. Now both genetic differentiation and reproductive isolation are satisfied. Hence it is proper to call the process through stages I-III speciation. So far we have not assumed any preference in mate choice; hence a sterile hybrid continues to be born. It is then natural to expect that some kind of mating preference evolves to reduce the probability of producing a sterile hybrid. Here we study how mating preference evolves as a result of post-mating isolation. As a simple example, it is straightforward to include loci for mating preference parameters: we assume another set of genetic parameters that controls the mating behavior. Each individual has a set of mating threshold parameters, one corresponding to each phenotype variable. If, for some trait, a candidate partner's phenotype falls below the corresponding threshold, the individual denies the mating even if both satisfy the maturity condition. In simulations of such a model, we choose a pair of individuals that satisfy the maturity condition, and check whether one denies the other.
Only if neither denies the other does mating occur to produce offspring, with the genes from the parents mixed in the same way as in the previous section. If these conditions are not satisfied, the two individuals wait for the next step to find partners again (see also Appendix 11.3). Here the set of thresholds is regarded as a set of (genetic) parameters, and changes by mutation and recombination; the mutation is given by the addition of a random value to each threshold. Initially, all thresholds are smaller than the minimal value of the corresponding phenotype variable, so that no mating preference exists. If some threshold becomes larger than some of the phenotype values, mating preference appears. An example of numerical results is given in Fig. 5, where the change of the phenotype and some of the threshold parameters are plotted. Here, through the phenotype differentiation, one group (to be called the 'up' group) has a large value for some phenotype variable and almost null values for some others. Hence a sufficiently large positive threshold gives a candidate for mating preference. Right after the formation of the two genetically distinct groups that follows the phenotype separation, one of the mating threshold parameters starts to increase for one group. In the example of the figure, the 'up' group has a phenotype with (large, small) components and the other ('down') group with (small, large) components. There the 'up' group's threshold starts to increase, and the denial condition comes to be satisfied against individuals of the 'down' group. Mating between the two groups is then no longer allowed, and mating occurs only within each group. The mating preference thus evolved prohibits the inter-species mating that produces sterile hybrids. Note that the two groups do not establish the mating preference simultaneously.
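The threshold-based mating rule described above can be sketched as follows. The direction of the comparison (a partner is denied when its phenotype falls below the chooser's threshold for some trait) is an assumption inferred from the 'up'/'down' example in the text; the field names are hypothetical.

```python
def accepts(chooser, partner, n_traits):
    """chooser accepts partner unless, for some trait m, the partner's
    phenotype lies below the chooser's mating threshold for that trait.
    (Assumed reading: a raised threshold in one group excludes partners
    from the group whose corresponding phenotype component is small.)"""
    return all(partner["phenotype"][m] >= chooser["threshold"][m]
               for m in range(n_traits))

def try_mating(i, j, n_traits):
    """Mating occurs only if both individuals satisfy the maturity
    condition and neither denies the other; otherwise both wait for
    the next step to find a partner again."""
    if not (i["mature"] and j["mature"]):
        return False
    return accepts(i, j, n_traits) and accepts(j, i, n_traits)
```

A one-sided preference already suffices: if only the 'up' group raises its first-trait threshold, `try_mating` between the groups fails even though the 'down' group denies nobody.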
In some cases, only one group develops a positive threshold, which is enough to establish the mating preference, while in other cases both groups develop positive thresholds, and the mating preference is established more rigidly. Although the evolution of mating preference here is a direct consequence of the post-mating isolation, it is interesting to note that the coexistence of the two species is further stabilized by the establishment of mating preference. Without it, there are cases in which one of the species disappears due to fluctuations after a very long time in the simulation; with it, the two species coexist much longer (at least within the span of our numerical simulations). In diploids, there are two alleles, and the two alleles do not contribute equally to the phenotype; for example, often only one allele controls the phenotype. If the loci from the two alleles are randomly mixed by recombination, the correlation between genotype and phenotype achieved by the mechanism discussed so far might be destroyed. Indeed, this problem was pointed out by Felsenstein (1981) as one difficulty for sympatric speciation. Of course, the problem is resolved if the genotypes on the two alleles establish a high correlation. To check whether this correlation is generated, we have extended our model to have two alleles, and examined whether the two alleles become correlated. Here we adopted the model studied so far and added a second allele (see also Appendix 11.4). In mating, the alleles from the parents are randomly shuffled for each locus. In other words, each organism has two sets of parameters; each set is inherited from either allele of one of the parents, and the other set from either allele of the other parent.
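The per-locus shuffling just described can be sketched as follows; representing a diploid genotype as a pair of equal-length parameter lists is an assumption made for illustration.

```python
import random

def diploid_offspring(parent_a, parent_b):
    """Form a diploid offspring genotype from two diploid parents.

    Each parent carries two allelic parameter sets (a pair of lists).
    For every locus, the offspring inherits one value drawn at random
    from each parent's two alleles, i.e. the alleles are shuffled
    independently locus by locus.
    """
    n_loci = len(parent_a[0])
    from_a = [parent_a[random.randrange(2)][k] for k in range(n_loci)]
    from_b = [parent_b[random.randrange(2)][k] for k in range(n_loci)]
    return (from_a, from_b)
```

Unless the two allele sets of each parent are correlated, this shuffling scrambles any genotype-phenotype correspondence, which is exactly the difficulty the two-allele simulations address.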
Here the parameters on only one of the alleles work as control parameters for the developmental dynamics of the phenotype. We have carried out some simulations of this version of our model (Kaneko, unpublished). Here again, the speciation proceeds in the same way, through stages I, II, and III. Hence our speciation scenario works well in the presence of alleles. In this model, the genotype-phenotype correspondence achieved at stage III could be destroyed if there were no correlation between the two alleles. Hence we have plotted the correlation between the two alleles as a two-dimensional pattern in Fig. 7. Initially there was no correlation, but through temporal evolution the correlation is established. In other words, the speciation in phenotype is consolidated into the genes, and later into the correlation between the two alleles. As already discussed, our speciation proceeds starting from phenotypic differentiation, then to genetic differentiation, then to post-mating isolation, and finally to pre-mating isolation (mating preference). This ordering might sound strange from the commonly adopted viewpoint, but we have shown that it is a natural and general consequence of a system whose developmental dynamics carry potential plasticity through interaction. In a biological system, we often tend to assume a *causal relationship* between two factors from the observation of a mere *correlation* between them. For example, when the resident areas of two species that share a common ancestor are spatially separated, we often guess that the spatial separation is the cause of the speciation. Indeed, allopatric speciation is often adopted as the explanation of speciation in nature. However, in many cases, what is observed in the field is just a correlation between spatial separation and speciation; which is the cause is not necessarily proved. Rather, spatial segregation can be a result of (sympatric) speciation.
By extending our theory, we can show that the spatial separation of two species can result from the sympatric speciation discussed here. To study this problem, we have extended our model by allocating to each organism a resident position in a two-dimensional space. Each organism moves around the space randomly but slowly, while the resources leading to the competitive interaction diffuse through the space much faster. If two organisms that satisfy the maturity condition meet in the space (i.e., are located within a given distance), they mate to produce offspring. In this model, we have confirmed that sympatric speciation first occurs through stages I-III of Section 3. Later, the two differentiated groups start to segregate spatially, as shown in Fig. 8. Sympatric speciation is thus shown to be consolidated into spatial segregation (Kaneko, in preparation). The spatial segregation here is observed when the range of interaction is larger than the typical range of mating; for example, if the mobility of the resources causing the competitive interaction is larger than the mobility of the organisms, spatial segregation of the sympatrically formed species results. Instead of a spatially local mating process, one can assume a slight gradient in the environmental conditions, for example a gradient in resources; in this case again, sympatric speciation is expected to be fixed later into spatial separation. To sum up, we have pointed out the possibility that some speciation events considered to be allopatric can be a result of sympatric speciation by our mechanism, later consolidated into the spatial segregation of the organisms. Our theory reviewed here is related to several earlier theories, but is conceptually different.
Here we briefly discuss these points. Since our mechanism crucially depends on the interaction, one might think that it is a variant of frequency-dependent selection. The important difference is that here the phenotype may not be uniquely determined by the genotype, even when the environment (including the population of organisms) is given. In frequency-dependent selection, genetically (and accordingly phenotypically) different groups interact with each other, and the fitness depends on the population of each group (Futuyma 1986). At the third stage of our theory, the condition for frequency-dependent selection is satisfied, and the evolution proceeds under frequency-dependent selection. However, the important point in our theory lies in the earlier stages, where a single genotype leads to different phenotypes. Indeed, this intrinsic nature of the differentiation is the reason why the speciation process here works at any (small) mutation rate and also under sexual recombination, without any other ad hoc assumptions. In our theory, the phenotype change is later consolidated into the genotype. Such genetic 'takeover' of phenotypic change was also discussed as the Baldwin effect, where the displacement of a phenotypic character is fixed into genes. In the discussion of the Baldwin effect, the phenotypic character is given by an epigenetic landscape (Waddington 1957). In our case, the phenotype differentiation is formed through a developmental process that generates different characters due to the interaction, and the distinct characters stabilize each other through the interaction. With this interaction dependence, the two groups are necessary to each other, and a robust speciation process results. Hence the fixation into the genotype in our theory is related to the Baldwin effect, but the two are conceptually different.
Since the separation of the two groups with distinct phenotypes is supported by the interaction, the present speciation mechanism is possible without supposing any mating preference. In fact, the hybrid becomes inferior in reproduction rate, and a mating preference based on discrimination by phenotype is shown to evolve. Indeed, a mechanism to amplify the differentiation by mating preference has been sought as 'reinforcement' since Dobzhansky (1951). Our theory thus also gives a plausible basis for the evolution of mating preference, without assuming ad hoc reinforcement and without any presumption of hybrid inferiority. Note that our phenotypic differentiation through development is different from so-called 'phenotypic plasticity', in which a single genotype produces *alternative phenotypes* in *alternative environments* (Callahan, Pigliucci and Schlichting 1997; Spitze and Sadler 1996; Weinig 2000). In contrast, in our case, distinct phenotypes from a single genotype are formed *under the same environment*. In fact, in our model this phenotypic differentiation is necessary for the later genetic differentiation: without it, even if distinct phenotypes appear in different environments as in 'phenotypic plasticity', genetic differentiation does not follow. In spite of this difference, both are concerned with flexibility in phenotypes; some of the phenotypic plasticity studied so far may bring about developmental flexibility of our kind under a different environmental condition. In our case, competitive interaction is relevant to speciation. Indeed, the coexistence of two (or more) species after the completion of speciation has been discussed as resource competition by Tilman (1976, 1981). Although his theory gives an explanation for the coexistence, the speciation process itself is not discussed, because there two individuals with a slight genotypic difference can have only a slight phenotypic difference.
In our theory, even if the genotypes of two individuals are the same or only slightly different, their phenotypes can be of quite different types. Accordingly, our theory provides a basis for resource competition as well. The general conclusion of our theory is that sympatric speciation can generally occur under strong interaction, if the condition for interaction-induced phenotype differentiation is satisfied. We briefly discuss the relevance of our theory to biological evolution. Since the present speciation is triggered by interaction, the process is not so much random as deterministic. Once the interaction among individuals brings about phenotypic diversification, speciation always proceeds directionally, without waiting for a rare, specific mutation. The evolution in our scenario has a 'deterministic' nature and a fast tempo for speciation, which is different from the typical 'stochastic' view of mutation-driven evolution. Some of the phenotypic explosions in the history of evolution have been recorded as having occurred within short geologic periods; following these observations, punctuated equilibrium was proposed (Gould and Eldredge 1977). Our speciation scenario possibly gives an interpretation of this punctuated equilibrium: it may have followed the deterministic and fast course of interaction-induced speciation. In the process of speciation, the potentiality of a single genotype to produce several phenotypes is consumed and may decline. After the phenotypic diversification of a single genotype, each genotype that newly appears by mutation takes one of the diversified phenotypes in the population. Thus, the one-to-many correspondence between the original genotype and the phenotypes is consumed. Through the present process of speciation, the potentiality of single genotypes to produce various phenotypes decreases, unless the new genotypes introduce another positive feedback process to amplify small differences.
As a result, one may see single genotypes expressing only one (or a small number of) phenotypes in nature. Since most organisms at the present time have gone through several speciation processes, they may have reduced their potentiality to produce various phenotypes. According to our theory, if organisms have a high potentiality, they will undergo a speciation process before long, and the potentiality will decrease. In other words, natural organisms tend to lose the potentiality to produce various phenotypes in the course of evolution. As a reflection of this evolutionary decline of the potentiality, one can expect that mutant genotypes tend to have a higher potentiality than the wild-type genotype. As mentioned in Section 1, low or incomplete penetrance (Opitz 1981) is known to occur often in mutants, compared with the higher penetrance of the wild type. Our result is consistent with these observations, since wild types are in most cases a consequence of evolution, in which the one-to-one correspondence is recovered, while mutants can have a higher potentiality, with a looser correspondence. The relationship between development and evolution has been discussed extensively; our theory states the relevance of developmental plasticity to speciation. Taking our results and the experimental facts into account, one can predict that organisms emerging as a new species have a high potentiality to produce a variety of phenotypes. From this viewpoint, it is interesting to discuss why insects, for example, have a higher potentiality for speciation, and also to examine whether living fossils, such as _Latimeria chalumnae_, _Limulus_, and so forth, have a stable expression of a small number of phenotypes.
In our speciation theory, plasticity declines through the evolution. Of course, there should be occasions when the potentiality is regained, so that the evolution continues. For example, a change of environment may influence the developmental dynamics to regain a loose correspondence, or the introduction of novel degrees of freedom or genes may provide such looseness; endosymbiosis can be one such cause. Also, a change of the interaction through spatial factors may introduce a novel instability in the dynamics, resulting in a loose correspondence. One important point of our theory is that speciation in asexual and sexual organisms is explained within the same framework. Of course, the standard definition of species through hybrid sterility applies only to sexual organisms. However, it is true that asexual organisms, and even bacteria, exhibit discrete geno- and phenotypes. It has been suggested that 'species', i.e., discrete types with reproductive isolation, may exist in asexual organisms (Roberts and Cohan 1995; Holman 1987). There are also discussions that the potentiality for speciation in asexual organisms is not lower than in sexual organisms.
In this sense, the present theory sheds new light on the problem of speciation in asexual organisms as well. According to our theory, sympatric speciation under sexual reproduction starts first from phenotypic differentiation; genetic diversification then takes place, leading to hybrid sterility; and finally the speciation is fixed by mating preference. This order may differ from the one most commonly adopted in studies. Hence, our theory can be verified by confirming this chronological order in the field. One difficulty, however, lies in the fact that the process from phenotypic differentiation to the last stage is rather fast according to our theory. Still, it may be possible to find this order in the field, by first searching for phenotypic differentiation of organisms with identical genotypes under identical environments. In this respect, the data on the cichlids of a Nicaraguan lake may be promising (Wilson, Noack-Kunnmann, and Meyer 2000), since phenotypic differences corresponding to different ecological niches are observed even though a clear genetic difference has not yet been observed. Discussion of the mechanism of evolution using past data, however, often remains anyone's guess. Most important in our scenario, in contrast, is its experimental verifiability, since the process of speciation is rather fast. For example, the evolution of _E. coli_ can be observed in the laboratory, as has been demonstrated by Kashiwagi et al. (1998, 2001) and W.-Z. Xu et al. (1996). As mentioned in Section 1, phenotypic differentiation of _E. coli_ is experimentally observed when their interaction is strong. Since the strength of interaction can be controlled by the resources and the population density, one can check whether or not the evolution at the genetic level is accelerated through interaction-induced phenotypic diversification (Kashiwagi et al.). Examination of the validity of our speciation scenario will give a first step to such studies.
To sum up, we have shown that developmental plasticity induced by interaction leads to phenotypic differentiation, which is consolidated into genes. Thus, distinct species with distinct geno- and phenotypes are formed. This leads to hybrid sterility, and later mating preference evolves. Still later, this differentiation can be fixed into the correlation between alleles or into spatial segregation. The original differentiation in phenotypes can be understood as symmetry breaking from a homogeneous state, in physical terms, while the successive consolidation of the broken symmetry into different properties observed at the later stages is more important for biological evolution. This dynamic process of consolidation is a key issue in development and evolution (see also Newman 2002). *Acknowledgment* The author would like to thank Tetsuya Yomo for collaboration in the studies on which the present paper is based. He would also like to thank Hiroaki Takagi and Chikara Furusawa for useful discussions, and Masakazu Shimada, Jin Yoshimura, Masakado Kawata, and Stuart Newman for illuminating suggestions. The present study is supported by Grants-in-Aid for Scientific Research from the Ministry of Education, Culture, Sports, Science and Technology of Japan (11CE2006). *Appendix* We study a simple abstract model of evolution with an internal dynamical process for development. In the model, each individual has several (metabolic or other) cyclic processes, and each process is described by a state variable that changes in time. With several such processes, the state of an individual is given by the set of these variables, which defines the phenotype. The variables can be regarded as concentrations of chemicals, rates of metabolic processes, or some quantity corresponding to a higher function. The state changes temporally according to a set of deterministic equations with some parameters.
To be specific, our toy model consists of the following dynamics. (1) Dynamics of the state: the state variable is split into its integer part and its fractional part, and mutation adds a random value of small width, corresponding to the mutation rate. In the present model, due to the nonlinear nature of the dynamics, the state variable often oscillates in time chaotically or periodically. Hence it is natural to use the variable, including its integer part, as a representation of the phenotype, since the integer part represents the number of cyclic processes used for reproduction. We have also carried out some simulations of a model with a reaction network, where the variables represent the chemical concentrations within an individual. Each individual gets resources depending on its internal state; through the catalytic reaction process, some products are synthesized from the resources. When the products exceed a given threshold, the individual splits in two, as in the model for isologous diversification (Kaneko and Yomo 1994, 1997, 1999; Furusawa and Kaneko 1998). With the increase of the number of individuals, they compete for resources, while individuals are also removed randomly. Since genes code the catalytic activity of enzymes, the rate of each reaction in the catalytic network is controlled by a gene; hence the parameter for each reaction rate is adopted as the genetic parameter. Through mutation of these reaction rates, the speciation process discussed throughout the paper is also observed (Takagi, Kaneko, Yomo 2000). To include sexual recombination, we have extended our model so that two organisms satisfying the threshold condition mate to produce two offspring. When they mate, the offspring have parameter values intermediate between those of the parents: the two offspring are produced by mixing the parents' genotype parameters with a random weight. Here the set of parameters is introduced as a set of (genetic) parameters, and changes by mutation and recombination.
The mutation is given by the addition of a random value at each step. The organisms move within a square of a given size with periodic boundary conditions. If two individuals that satisfy the maturity condition are within a given distance, they can produce two offspring, which are located between the parents.
1. Beam C.A., Preparata R.M., Himes M., Nanney D.L., "Ribosomal RNA sequencing of members of the Crypthecodinium cohnii (Dinophyceae) species complex; comparison with soluble enzyme studies," Journal of Eukaryotic Microbiology 40(5): 660-667 (1993).
2. Callahan H.S., Pigliucci M., and Schlichting C.D., "Developmental phenotypic plasticity: where ecology and evolution meet molecular biology," BioEssays 19: 519-525 (1997).
3. Coyne J.A. & Orr H.A., "The evolutionary genetics of speciation," Phil. Trans. Roy. Soc. London B 353: 287-305 (1998).
4. Darwin C., On the Origin of Species by Means of Natural Selection, or the Preservation of Favored Races in the Struggle for Life (Murray, London, 1859).
5. Dieckmann U. & Doebeli M., "On the origin of species by sympatric speciation," Nature 400: 354-357 (1999).
6. Doebeli M., "A quantitative genetic competition model for sympatric speciation," J. Evol. Biol. 9: 893-909 (1996).
7. Dobzhansky T., Genetics and the Origin of Species (Columbia Univ. Press, 1937, 1951).
8. Felsenstein J., "Skepticism towards Santa Rosalia, or why are there so few kinds of animals?," Evolution 35: 124-138 (1981).
9. Furusawa C. & Kaneko K., "Emergence of rules in cell society: differentiation, hierarchy, and stability," Bull. Math. Biol. 60: 659-687 (1998).
10. Futuyma D.J., Evolutionary Biology, 2nd ed. (Sinauer Associates, Sunderland, Mass., 1986).
11. Gilbert S.F., Opitz J.M., and Raff R.A., "Resynthesizing evolutionary and developmental biology," Developmental Biol. 173: 357-372 (1996).
12. Gould S.J. and Eldredge N.,
"Punctuated equilibria: the tempo and mode of evolution reconsidered," Paleobiology 3: 115-151 (1977).
13. Holman E., "Recognizability of sexual and asexual species of rotifers," Syst. Zool. 36: 381-386 (1987).
14. Holmes L.B., "Penetrance and expressivity of limb malformations," Birth Defects Ser. 15: 321-327 (1979).
15. Howard and S.H. Berlocher (eds.), Endless Forms: Species and Speciation (Oxford Univ. Press, 1998).
16. Kaneko K., "Clustering, coding, switching, hierarchical ordering, and control in network of chaotic elements," Physica D 41: 137-172 (1990).
17. Kaneko K., "Relevance of clustering to biological networks," Physica D 75: 55-73 (1994).
18. Kaneko K., "Coupled maps with growth and death: an approach to cell differentiation," Physica D 103: 505-527 (1998).
19. Kaneko K. & Yomo T., "Cell division, differentiation, and dynamic clustering," Physica D 75: 89-102 (1994).
20. Kaneko K. & Yomo T., "Isologous diversification: a theory of cell differentiation," Bull. Math. Biol. 59: 139-196 (1997).
21. Kaneko K. & Yomo T., "Isologous diversification for robust development of cell society," J. Theor. Biol. 199: 243-256 (1999).
22. Kaneko K. & Yomo T., "Symbiotic speciation from a single genotype," Proc. Roy. Soc. B 267: 2367-2373 (2000).
23. Kaneko K. & Yomo T., "Symbiotic sympatric speciation through interaction-driven phenotype differentiation," Evol. (2002), in press.
24. Kashiwagi A., Kanaya T., Yomo T., Urabe I., "How small can the difference among competitors be for coexistence to occur," Researches on Population Ecology 40: 223 (1998).
25. Kashiwagi A., Noumachi W., Katsuno M., Alam M.T., Urabe I., and Yomo T., "Plasticity of fitness and diversification process during an experimental molecular evolution," J. Mol. Evol. (2001), in press.
26. Kawata M. & Yoshimura J.,
"Speciation by sexual selection in hybridizing populations without viability selection," Evol. Ecol. Res. 2: 897-909 (2000).
27. Ko E., Yomo T., & Urabe I., "Dynamic clustering of bacterial population," Physica D 75: 81-88 (1994).
28. Kobayashi C., Suga Y., Yamamoto K., Yomo T., Ogasahara K., Yutani K., and Urabe I., "Thermal conversion from low- to high-activity forms of catalase I from Bacillus stearothermophilus," J. Biol. Chem. 272: 23011-23016 (1997).
29. Kondrashov A.S. & Kondrashov A.F., "Interactions among quantitative traits in the course of sympatric speciation," Nature 400: 351-354 (1999).
30. Lande R., "Models of speciation by sexual selection on polygenic traits," Proc. Natl. Acad. Sci. USA 78: 3721-3725 (1981).
31. Maynard Smith J., "Sympatric speciation," The American Naturalist 100: 637-650 (1966).
32. Maynard Smith J. and Szathmary E., The Major Transitions in Evolution (W.H. Freeman, 1995).
33. Maynard Smith J., Burian R., Kauffman S., Alberch P., Campbell J., Goodwin B., Lande R., Raup D., and Wolpert L., "Developmental constraints and evolution," Q. Rev. Biol. 60: 265-287 (1985).
34. Newman S.A., "From physics to development: the evolution of morphogenetic mechanism," to appear in Origins of Organismal Form, eds. Müller and S.A. Newman (MIT Press, Cambridge, 2002).
35. Opitz J.M., "Some comments on penetrance and related subjects," Am. J. Med. Genet. 8: 265-274 (1981).
36. Roberts M.S. and Cohan F.M., "Recombination and migration rates in natural populations of _Bacillus subtilis_ and _Bacillus mojavensis_," Evolution 49: 1081-1094 (1995).
37. Rosenzweig, "Competitive speciation," Biol. J. of Linnean Soc. 10: 275-289 (1978).
38. Schliewen U.K., Tautz D. & Pääbo S., "Sympatric speciation suggested by monophyly of crater lake cichlids," Nature 368: 629-632 (1994).
39. Spitze K.
a mechanism of sympatric speciation is presented based on the interaction - induced developmental plasticity of phenotypes . first , phenotypes of individuals with identical genotypes split into a few groups , according to an instability in the developmental dynamics that is triggered by the competitive interaction among individuals . then , through mutational change of genes , the phenotypic differences are fixed to genes , until the groups are completely separated in genes as well as in phenotypes .
it is also demonstrated that the proposed theory leads to hybrid sterility under sexual recombination , and thus speciation is completed in the sense of reproductive isolation . as a result of this post - mating isolation , the mating preference evolves later . when there are two alleles , a correlation between alleles is formed , consolidating the speciation . when individuals are located in space , different species are later segregated spatially , implying that speciation so far regarded as allopatric may be a result of sympatric speciation . relationships with previous theories ( frequency - dependent selection , reinforcement , baldwin's effect , phenotypic plasticity , and resource competition ) are briefly discussed . the relevance of the results to natural evolution is discussed , including punctuated equilibrium , incomplete penetrance in mutants , and the change in flexibility of the genotype - phenotype correspondence . finally , it is discussed how our theory can be confirmed both in the field and in the laboratory ( in an experiment with _ e. coli _ ) . key words : dynamical systems , development , phenotypic plasticity , post - mating isolation , mating preference , genotype - phenotype mapping . e - mail : kaneko.c.u-tokyo.ac.jp
the mathematical study of enzyme kinetics ( or michaelis - menten kinetics ) is mainly based on the standard quasi - steady state approximation ( sqssa ) , which has been used in biochemistry since the pioneering papers by bodenstein and chapman and underhill . it starts from the observation that enzyme reactions are characterized by a first , short transient phase , where the intermediate complex rapidly grows , and a second , longer quasi - equilibrium phase , where the complex slowly decays into the product , in general the activated substrate . each phase of the reaction has a time scale ( and , respectively ) . the quasi - steady state approximation is a very efficient simplification for describing a typical saturation phenomenon , occurring in enzyme kinetics but present in several other biological systems as well . let us cite , just as non - exhaustive examples , the monod - wyman - changeux molecular model of cooperativity in allosteric reactions or , more recently , the model of lekszycki and coworkers concerning bone regeneration and the model of infarcted cardiac tissue regeneration by means of stem cells , where saturation phenomena are observed . the reaction can be described as follows . let us consider an enzyme , , which reacts with a protein , , resulting in an intermediate complex , . in turn , this complex can break down into a product , , and the enzyme . it is frequently assumed that the formation of is reversible while its breakup is not . the process is represented by the following sequence of reactions where are the reaction rates . for notational convenience we will use variable names to denote both a chemical species and its concentration . for example , denotes both an enzyme and its concentration . reaction ( [ eq : a1 ] ) obeys two natural constraints : the total amounts of protein and enzyme remain constant . therefore , for positive constants and .
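the mass - action dynamics behind reaction ( [ eq : a1 ] ) are easy to integrate directly ; the following pure - python sketch ( rate constants and conserved totals are illustrative placeholders , not values from the text ) evolves the substrate - complex pair with a hand - rolled rk4 step and recovers the enzyme and the product from the two conservation laws :

```python
# mass-action kinetics of E + S <-> C -> E + P, integrated with a
# hand-rolled RK4 step.  rate constants and totals are illustrative
# placeholders, not values from the text.
k1, km1, k2 = 1.0, 1.0, 1.5        # association, dissociation, catalytic rates
e_T, s_T = 1.0, 10.0               # conserved totals: e + c = e_T, s + c + p = s_T

def rhs(state):
    s, c = state
    e = e_T - c                    # enzyme conservation
    ds = -k1 * e * s + km1 * c
    dc = k1 * e * s - (km1 + k2) * c
    return (ds, dc)

def rk4_step(state, h):
    def shift(u, v, a):
        return tuple(ui + a * vi for ui, vi in zip(u, v))
    s1 = rhs(state)
    s2 = rhs(shift(state, s1, h / 2))
    s3 = rhs(shift(state, s2, h / 2))
    s4 = rhs(shift(state, s3, h))
    return tuple(state[i] + h / 6 * (s1[i] + 2 * s2[i] + 2 * s3[i] + s4[i])
                 for i in range(2))

state, h = (s_T, 0.0), 1e-3        # s(0) = s_T, c(0) = 0
for _ in range(20000):             # integrate up to t = 20
    state = rk4_step(state, h)
s_end, c_end = state
p_end = s_T - s_end - c_end        # product from substrate conservation
```

note that only the substrate and the complex are state variables : the enzyme and the product are reconstructed from the conservation laws , exactly in the spirit of ( [ eq : a2 ] ) .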
in conjunction with the constraints ( [ eq : a2 ] ) , the following cauchy problem for a system of two ordinary differential equations can be used to model reaction ( [ eq : a1 ] ) : \notag \\ & x(0)=x_t , \quad c(0)=0.\end{aligned}\ ] ] where and where , , , , are viewed as fixed positive constants and is the michaelis affinity constant . similarly , we can define the dissociation constant and the van slyke - cullen constant . since , after a short transient in which the complex rapidly grows and reaches its maximal concentration , the complex slowly decays , the sqssa consists in supposing that , after the transient phase , the complex can be considered in quasi - equilibrium , i.e. , in posing . with this approximation , the system becomes the differential - algebraic system ( ) where only the initial condition can be imposed , because the sqssa describes only the slow phase , in which the initial value of is its maximal value , instead of . in the sixties of the last century , mathematicians ( see , in particular , ) interpreted the sqssa in terms of the leading order term of asymptotic expansions with respect to a perturbation parameter , which must be supposed small . heineken et al . used , because in the literature it is widely customary to impose that the initial concentration of the enzyme be much less than the concentration of the substrate . the parameter can also arise by virtue of a biochemical condition imposing the separation between the two time scales and characterizing the reaction ( see also ) . in this way , segel and slemrod showed that the sqssa can also be obtained as the leading order of an asymptotic expansion in terms of , enlarging the parameter range of validity of the sqssa . inspired by the papers by laidler , swoboda , and schauer and heinrich , borghans et al . introduced a different approximation , called the total quasi - steady state approximation ( tqssa ) , which uses the new variable , called the total substrate .
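the reduced sqssa dynamics can be checked against the integrated ( implicit ) michaelis - menten law ; a sketch with the same illustrative constants as above , where the symbol names K_M , K_d , K_vs are ours , since the original symbols are not displayed here :

```python
# sQSSA reduced dynamics ds/dt = -Vmax * s / (K_M + s), with illustrative
# constants; the integrated (implicit) michaelis-menten law
#   Vmax * t = (s0 - s) + K_M * log(s0 / s)
# is exact for the reduced model and serves as a check on the integration.
import math

k1, km1, k2 = 1.0, 1.0, 1.5
e_T, s0 = 1.0, 10.0
K_M  = (km1 + k2) / k1             # michaelis (affinity) constant
K_d  = km1 / k1                    # dissociation constant
K_vs = k2 / k1                     # van slyke - cullen constant
Vmax = k2 * e_T

def f(s):
    return -Vmax * s / (K_M + s)

s, h, t = s0, 1e-3, 0.0
for _ in range(5000):              # RK4 up to t = 5
    a = f(s); b = f(s + h / 2 * a); c = f(s + h / 2 * b); d = f(s + h * c)
    s += h / 6 * (a + 2 * b + 2 * c + d)
    t += h
residual = Vmax * t - ((s0 - s) + K_M * math.log(s0 / s))
```

with these definitions the three constants satisfy K_M = K_d + K_vs , consistent with the definitions recalled in the text .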
formally , introducing the `` lumped '' variable , problem ( [ eq : a3 ] ) can be rewritten as , \notag \\ & \overline{x}(0)=x_t , \quad c(0)=0.\end{aligned}\ ] ] the tqssa also posits that equilibrates quickly compared to . imposing a quasi - steady state approximation in this case as well ( ) , we obtain where is the only biologically allowed solution of . let us remark that since , thanks to the conservation laws , , the tqssa can be viewed as the other side of the coin of laidler's theory , though the approach followed in implicitly contains much more information about the reliability of the approximation , as shown in . the tqssa , too , can be seen as the leading order term of an asymptotic expansion in terms of a suitable parameter , , producing a new approximation which is valid in a much wider parameter range . the parameter , introduced in , appears already in a paper by palsson , where the author determines sufficient conditions for the application of any quasi - steady state approximation , based again on time scale separation . taking into account that the perturbation parameter is always less than , its introduction in terms of time scale separation appears much more natural than the previous parameters . this result gives a theoretical mathematical foundation for the choice of the parameter in the tqssa . moreover , several authors ( see , for example , ) study the transient phase of the reaction supposing that in this phase does not change substantially . this hypothesis is not realistic , whereas , using the total substrate , we observe that at time 0 we have , which much more naturally expresses the requirement that the total substrate change little in the initial phase of the reaction . in figure [ figura1 ] we show the different accuracy of the two quasi - steady state approximations , when the parameters are stressed in such a way that the sqssa is no longer valid .
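a corresponding sketch of the tqssa ( constants again illustrative , symbol names ours ) : the quasi - steady complex is the biologically allowed ( smaller ) root of a quadratic in the total substrate , and the reduced equation evolves the total substrate alone :

```python
# tQSSA sketch (illustrative constants): with total substrate sbar = s + c,
# the quasi-steady complex is the smaller (biologically allowed) root of
#   c**2 - (e_T + K_M + sbar) * c + e_T * sbar = 0,
# and the reduced dynamics is d(sbar)/dt = -k2 * c(sbar).
import math

k1, km1, k2 = 1.0, 1.0, 1.5
e_T, s_T = 1.0, 10.0
K_M = (km1 + k2) / k1

def c_of(sbar):
    a = e_T + K_M + sbar
    return (a - math.sqrt(a * a - 4.0 * e_T * sbar)) / 2.0

def f(sbar):
    return -k2 * c_of(sbar)

sbar, h = s_T, 1e-3                # sbar(0) = s_T, since c(0) = 0
for _ in range(5000):              # RK4 up to t = 5
    a = f(sbar); b = f(sbar + h / 2 * a); c = f(sbar + h / 2 * b); d = f(sbar + h * c)
    sbar += h / 6 * (a + 2 * b + 2 * c + d)
c_end = c_of(sbar)
```

the minus sign in front of the square root selects the root that vanishes when the total substrate vanishes and never exceeds the total enzyme , which is why it is the biologically allowed branch .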
) , with their sqssa ( [ eq : a3bis ] ) and tqssa ( [ eq : tqssa_single ] ) . the parameter set is the following : . the inadequacy of the sqssa , mainly in the first part of the reaction , is evident , while the tqssa is indistinguishable from the numerical solution of the system.,title="fig:",scaledwidth=50.0% ] in previous literature the different qssas are approached by means of two different tools : tihonov's theorem , which studies the asymptotic stability of systems of differential equations characterized by the presence of small perturbative parameters , and center manifold theory , which is one of the most powerful tools for studying the dimensional reduction of differential systems . for example , on the one hand , heineken et al .
and dvořák and šiška quote tihonov's theorem in order to justify the sqssa , while khoo and hegland refer to this theorem to apply the tqssa ; on the other hand , other authors interpret the sqssa and the tqssa , respectively , as the slow manifold of the michaelis - menten kinetics . however , to the best of our knowledge , the two approaches have not yet been compared , in order to check whether there is any equivalence between the so - called singular points , introduced by tihonov , and the center manifolds , as studied , for example , by carr . applying the techniques exposed in , we show that the two approximations are asymptotically equivalent , concluding that the sqssa and the tqssa can each be interpreted both as the singular point of the michaelis - menten kinetics and as its center manifold . this means that tihonov's theorem implies that any qssa can be mathematically interpreted as the study of the reduced system obtained from the original system by setting the perturbative parameter , instead of setting the derivative of the complex equal to zero . this fact formally justifies the application of a `` mechanistic '' passage , consisting in equating to zero the derivatives of the complexes , in the single reaction scheme as in more complex reactions , because this is the simplest way to reach ( an approximate ) expression of the center manifold . for example , wang and sontag apply this technique for the study of the sqssa of the double phosphorylation - dephosphorylation cycle .
as shown in , however , these approximations are no longer applicable to mechanisms where oscillations can appear , and an a priori analysis of applicability should be performed every time we have to deal with any qssa . the paper is organized as follows : in section 2 we recall the main definitions and properties of tihonov's theory and of center manifold theory ; in section 3 we show the equivalence of the two approaches in the case of the sqssa , of the tqssa , and for a class of more general systems of differential equations characterized by the presence of a small perturbative parameter ; in section 4 we discuss some future applications of these results to more complex enzymatic reactions . in this section we introduce the notation and summarize the results we need for the formulation of the problem investigated here . for the convenience of the reader we closely follow the notation of the fundamental book by wiggins . we will investigate systems in the class of general autonomous vector fields it is natural to consider the linearized system associated to the vector field ( [ eq:1 ] ) , if is one of its fixed points , with the constant jacobian matrix . then can be represented as the direct sum of three subspaces , denoted , , and , which are defined as follows : where , , are the ( generalized ) eigenvectors of corresponding to the eigenvalues having negative real part , positive real part , and zero real part , respectively .
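the splitting of the spectrum into stable , unstable , and center parts can be illustrated with a toy matrix ( an upper - triangular example of our own , so the eigenvalues can be read off the diagonal and are real ) :

```python
# splitting the spectrum of a linearization into stable, unstable, and
# center parts.  the matrix is an illustrative upper-triangular example
# (ours), so its eigenvalues are simply the diagonal entries and are real.
A = [[-2.0, 1.0, 0.0],
     [ 0.0, 0.0, 3.0],
     [ 0.0, 0.0, 0.5]]
eigs = [A[i][i] for i in range(3)]         # triangular => diagonal spectrum
stable   = [l for l in eigs if l < 0.0]    # eigenvalues spanning E^s
center   = [l for l in eigs if l == 0.0]   # eigenvalues spanning E^c
unstable = [l for l in eigs if l > 0.0]    # eigenvalues spanning E^u
dims = (len(stable), len(center), len(unstable))   # dimensions (s, c, u)
```

here the three invariant subspaces have dimensions ( 1 , 1 , 1 ) , and the zero eigenvalue is exactly the situation in which the reduction principle discussed below becomes relevant .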
, , and are referred to as the stable , unstable , and center subspaces , respectively . they are invariant subspaces ( or manifolds ) , since solutions of ( [ eq:7 ] ) with initial conditions entirely contained in either , , or must remain in that particular subspace for all time . it is well known that there exists a linear transformation which transforms the linear equation ( [ eq:7 ] ) into block diagonal form where , , is an matrix having eigenvalues with negative real part , is an matrix having eigenvalues with positive real part , and is an matrix having eigenvalues with zero real part . the in the block diagonal form ( [ eq:14 ] ) indicate appropriately sized blocks consisting of all zeros . using this same linear transformation to transform the coordinates of the nonlinear vector field ( [ eq:1 ] ) gives the equation where , , and are the first , , and components , respectively , of the vector . the following theorem shows how this structure changes when the nonlinear vector field ( [ eq:13 ] ) is considered . it is stated without proof ( see for details ) . [ th:2 ] suppose ( [ eq:13 ] ) is , . then the fixed point of ( [ eq:13 ] ) possesses an s - dimensional local , invariant stable manifold , , a u - dimensional local , invariant unstable manifold , , and a c - dimensional local , invariant center manifold , , all intersecting in . these manifolds are all tangent to the respective invariant subspaces of the linear vector field ( [ eq:14 ] ) at the origin and , hence , are locally representable as graphs . in particular , we have where , , , , , and are functions . moreover , trajectories in and have the same asymptotic properties as trajectories in and , respectively . namely , trajectories of ( [ eq:13 ] ) with initial conditions in ( resp . , ) approach the origin at an exponential rate asymptotically as ( resp .
, ) . if the eigenvalues of the center subspace are all precisely zero , rather than merely having zero real part , then the center manifold is called a _ slow manifold _ . if , then any orbit will rapidly decay to . thus , in order to investigate the long - time behavior ( i.e. , stability ) we need only investigate the system restricted to . this simple reasoning is the foundation of the `` reduction principle '' applied to the study of the stability of nonhyperbolic fixed points of nonlinear vector fields . for our purposes , let us consider vector fields of the following form where in the above , is a matrix having eigenvalues with zero real parts , is an matrix having eigenvalues with negative real parts , and and are functions ( ) . for the sake of notational simplicity , let us write the center manifold in the following way : with sufficiently small . [ r:1 ] we remark that the conditions and imply that is tangent to at . the following three theorems are taken from the seminal book , as reported in . [ th:3 ] there exists a center manifold for ( [ eq:34 ] ) . the dynamics of ( [ eq:34 ] ) restricted to the center manifold is , for sufficiently small , given by the following c - dimensional vector field the next result implies that the dynamics of ( [ eq:35 ] ) near determines the dynamics of ( [ eq:34 ] ) near . [ th:4 ] i ) let the zero solution of ( [ eq:35 ] ) be stable ( asymptotically stable ) ( unstable ) ; then the zero solution of ( [ eq:34 ] ) is also stable ( asymptotically stable ) ( unstable ) . ii ) let the zero solution of ( [ eq:35 ] ) be stable . then if is a solution of ( [ eq:34 ] ) with sufficiently small , there is a solution of ( [ eq:35 ] ) such that as , where is a constant . it is possible to compute the center manifold , so that we can reap the benefits of theorem [ th:4 ] .
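as a concrete preview of such a computation , the invariance condition dh/dx · ẋ = ẏ on the graph y = h(x) can be solved order by order in a power series ; the planar system ẋ = x y , ẏ = - y + x^2 used below is our own illustration , not one of the paper's systems :

```python
# power-series solution of the invariance equation on y = h(x) for the
# illustrative planar system (ours):  xdot = x*y,  ydot = -y + x**2.
# tangency at the origin forces h(x) = a2*x^2 + a3*x^3 + ..., and the
# residual  h'(x) * x * h(x) + h(x) - x^2 = 0  fixes each a_n in terms
# of the lower-order coefficients.
N = 6
h = [0.0] * (N + 1)                # h[n] = a_n ; a_0 = a_1 = 0 (tangency)

def poly_mul(p, q, n_max):
    r = [0.0] * (n_max + 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            if i + j <= n_max:
                r[i + j] += pi * qj
    return r

for n in range(2, N + 1):
    hp = [(m + 1) * h[m + 1] for m in range(N)] + [0.0]   # h'(x)
    xh = [0.0] + h[:N]                                    # x * h(x)
    lhs = poly_mul(hp, xh, N)      # involves only a_2 .. a_{n-2}
    h[n] = (1.0 if n == 2 else 0.0) - lhs[n]

# h(x) = x^2 - 2*x^4 + 12*x^6 + O(x^7); reduced flow: xdot = x * h(x)
```

the reduced flow on the manifold is ẋ = x h(x) = x^3 - 2 x^5 + ... , which already decides the stability of the origin , in the spirit of theorem [ th:4 ] .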
using the invariance of under the dynamics of ( [ eq:34 ] ) , we derive a quasi - linear partial differential equation that must satisfy in order for its graph to be a center manifold for ( [ eq:34 ] ) . this is done as follows : 1 . the coordinates of any point on must satisfy 2 . differentiating ( [ eq:36 ] ) with respect to time implies that the coordinates of any point on must satisfy 3 . any point on obeys the dynamics generated by ( [ eq:34 ] ) . therefore , substituting into ( [ eq:37 ] ) gives or then , to find a center manifold , all we need to do is to solve ( [ eq:40 ] ) . [ th:5 ] let be a mapping with such that as for some . then the theorem gives us a method for computing an approximate solution of ( [ eq:40 ] ) to any desired degree of accuracy . for this task , we will employ power series expansions ( note that , by remark [ r:1 ] , power series expansions start from second order ) . suppose now that ( [ eq:34 ] ) depends on a vector of parameters : where , with the same assumptions as in ( [ eq:34 ] ) . following wiggins , we will handle parameterized systems by including the parameter as a new dependent variable , as follows . this system has a fixed point at . the matrix associated with the linearization of ( [ eq:44 ] ) around this fixed point has eigenvalues with negative real part and eigenvalues with zero real part . let us now apply center manifold theory . modifying the definition given in formula ( [ eq : center ] ) , a center manifold will be represented as the graph of for and sufficiently small . theorem [ th:3 ] still applies , with the vector field reduced to the center manifold given by let us calculate the center manifold . using the invariance of the graph of under the dynamics generated by ( [ eq:44 ] ) , we have substituting ( [ eq:44 ] ) into ( [ eq:46 ] ) results in the following quasi - linear partial differential equation that must satisfy in order for its graph to be a center manifold .
+\notag \\ & -bh(x,\varepsilon)-g\left(x , h(x,\varepsilon),\varepsilon\right)=0\end{aligned}\ ] ] although center manifolds exist , they need not be unique . this can be seen from a well - known example due to anosov ( see , ) . it can be proven ( see , among others , as reported in ) that any pair of center manifolds of a given fixed point differ by ( at most ) exponentially small terms . thus , the taylor series expansions of any two center manifolds agree to all orders . moreover , it can be shown that , due to the attractive nature of the center manifold , certain orbits ( for example , fixed points , periodic orbits , homoclinic orbits , and heteroclinic orbits ) that remain close to the origin for all time must lie on every center manifold of the given fixed point . for this subsection we will refer to the well - known book by w. wasow and , in particular , to its relevant section on singular perturbations . a systematic study of the qualitative aspects of such singular perturbation problems can be found in a series of papers by tihonov ( , and ) . we consider differential systems of the form where is an -dimensional vector and an -dimensional vector . all variables are real , and is positive . we assume that : * the functions and in ( [ eq : c1 ] ) are continuous in an open region of the -space . * there is an -dimensional vector function , continuous in , such that the points , for all , are in and * there exists a number , independent of , such that the relations imply the function will be referred to as a root of the equation . it is not excluded that may have other roots besides . a root that satisfies condition c will be called _ isolated _ in . the system of differential equations in which is a parameter will be called the boundary layer equation belonging to the system ( [ eq : c1 ] ) . to ( [ eq : c1 ] ) there corresponds the reduced ( or degenerate ) system the solutions of ( [ eq : c1 ] ) and ( [ eq : c1bis ] ) define trajectories and in the -space .
we also assume : * the singular point of the boundary layer equation ( [ eq : c2 ] ) is asymptotically stable for all . the root will be called , briefly , a stable root in , if assumption ( d ) is satisfied . in accordance with our previous terminology , we refer to the problem consisting of equations ( [ eq : c1 ] ) together with the initial condition as the full problem . the reduced problem is here defined by the differential equation ( [ eq : c4 ] ) is , of course , obtained by setting in ( [ eq : c1 ] ) and determining the root of the equation . moreover , we assume : * the full problem , as well as the reduced one , has a unique solution in an interval . * the asymptotic stability of the singular point is uniform with respect to in . let . the set of points in the -space for which the inequalities hold will be called a `` -tube '' . the set constitutes the `` lateral boundary '' of the -tube . [ l:1 ] suppose assumptions ( a ) to ( f ) are satisfied . let be arbitrary but so small that the closure of the -tube lies in . then there exist two numbers and such that for the following is true : any solution of the full equation that is in the interior of the -tube for some value of , , and in the closure of the -tube for all in , does not meet the lateral surface of the -tube for . the lemma states that , for small , any solution that comes close to the curve in remains close to it , as long as .
for a convenient formulation of tihonov's theorem , according to , we introduce one more term . [ def:9 ] a point , is said to lie in the domain of influence of the stable root if the solution of the problem exists and remains in for all , and if it tends to , as . [ th:6 ] let assumptions ( a ) to ( f ) be satisfied and let be a point in the domain of influence of the root . then the solution , of the full initial value problem ( [ eq : c1 ] ) , ( [ eq : c3 ] ) is linked with the solution , of the reduced problem ( [ eq : c4 ] ) , ( [ eq : c5 ] ) by the limiting relations here is any number such that is an isolated stable root of for . the convergence is uniform in , for , and in any interval for . tihonov's theorem [ th:6 ] is only the first step in the asymptotic solution of initial value problems of the singular perturbation type . the most natural approach to these problems is to attempt a solution ( _ outer solution _ ) in the form of a series in powers of : and to determine the coefficients by means of formal substitution and comparison of coefficients . it is clear that we have to relate the series ( [ eq : c6 ] ) to the behavior of the solution of ( [ eq : c1 ] ) in the boundary layer , as shown , for example , in .
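tihonov's limiting relations can be seen numerically on a toy singularly perturbed system ( our choice of right - hand sides , not one from the paper ) : after a boundary layer of width of order the perturbation parameter , the fast variable collapses onto the stable root and the slow variable follows the reduced problem :

```python
# numerical illustration of tihonov's limiting relations on a toy
# singularly perturbed system (our choice of right-hand sides):
#   dx/dt = -y,    eps * dy/dt = x - y .
# the stable root is y = phi(x) = x, and the reduced problem dx/dt = -x,
# x(0) = 1 has the solution x(t) = exp(-t).
import math

eps = 0.01

def rhs(x, y):
    return (-y, (x - y) / eps)

x, y, h = 1.0, 0.0, 1e-4           # y(0) = 0 is far from the root phi(x) = x
for _ in range(10000):             # RK4 up to t = 1
    a = rhs(x, y)
    b = rhs(x + h / 2 * a[0], y + h / 2 * a[1])
    c = rhs(x + h / 2 * b[0], y + h / 2 * b[1])
    d = rhs(x + h * c[0], y + h * c[1])
    x += h / 6 * (a[0] + 2 * b[0] + 2 * c[0] + d[0])
    y += h / 6 * (a[1] + 2 * b[1] + 2 * c[1] + d[1])
# after the boundary layer, y tracks phi(x) = x and x tracks exp(-t)
```

the root y = x is stable in the sense of assumption ( d ) , since the boundary layer equation dy/dtau = x - y attracts y to x for every frozen x , which is exactly what makes the initial point lie in its domain of influence .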
for values of that are of order , the solution to our perturbation problem can be found starting from the stretching transformation . hence , the stretched form of the original problem is . also in this case , we determine the solution of the transient phase ( _ inner solution _ ) in terms of a series in powers of . the development of these passages is beyond the scope of this paper . for other accounts and a more detailed discussion , see . let us consider the enzymatic reaction described in ( [ eq : a1 ] ) and ( [ eq : a3 ] ) . clearly is a fixed point of ( [ eq : a3 ] ) . following , let us first nondimensionalize equations ( [ eq : a3 ] ) . let us observe that we could use different nondimensionalization procedures , in particular using the parameter , as proposed in . however , we follow the simpler scheme shown in , just in order to test our theoretical results and compare them with the results shown in : where and this is the heineken - tsuchiya - aris system . carr ( , p. 8 , example 3 ) uses equations ; to obtain ( [ eq : a9s ] ) from ( [ eq : a10carr ] ) , just impose and . we will start from ( [ eq : a9s ] ) in order to have a more realistic system . by applying the sqssa ( which corresponds to imposing ) , we have the reduced system ( _ outer solution _ ) of ( [ eq : a9s ] ) . as remarked above , heineken et al . and dvořák and šiška quote tihonov's theorem in order to justify the sqssa . the aim of this subsection is to determine the center manifold for ( [ eq : a9s ] ) , using the techniques described in , and to show that it is asymptotically equivalent to the singular points related to tihonov's theory . to this aim , let us now set . equations ( [ eq : a9s ] ) can be rewritten in the equivalent form ( _ inner solution _ ) where . in order to obtain for ( [ eq : a13 ] ) a block form of the type ( [ eq:34 ] ) , where the submatrix having eigenvalues with zero real parts is separated from the submatrix having eigenvalues with negative real parts , we make the substitution , i.e. , .
hence , and following , the way we will handle parameterized systems consists of including the parameter as a new dependent variable as in ( [ eq : a14 ] ) , which merely acts to augment the matrix by adding a new center direction that has no dynamics . in this way , system ( [ eq : a13 ] ) becomes where the parameter is a new variable , and the system could also have other fixed points . the associated linearized system has a diagonal form , where the eigenvalues are given by ( multiplicity ) and . to find a center manifold , all we need to do is to solve ( [ eq:40 ] ) for system ( [ eq : a14 ] ) , employing theorem [ th:5 ] , which gives us a method for computing an approximate solution of ( [ eq:40 ] ) to any desired degree of accuracy . referring to ( [ eq:40 ] ) and ( [ eq:34 ] ) , , , so we search for a function such that where using theorem [ th:5 ] we assume substituting ( [ eq : a16 ] ) into ( [ eq : a15 ] ) , one has : where accordingly , substituting ( [ eq : a18 ] ) into ( [ eq : a17 ] ) , one has +\kappa \left(a_1 u^2+a_2 u \varepsilon+a_3 \varepsilon^2+\dots\right)+ \notag \\ & -u\left(u - a_1 u^2-a_2 u \varepsilon - a_3 \varepsilon^2 + \dots\right)-\varepsilon \biggl[-\frac{\lambda}{\kappa } u + \left(-a_1+\frac{1}{\kappa}+\frac{\lambda}{\kappa}a_1\right ) u^2+\notag \\ & + \left(-a_2+\frac{\lambda}{\kappa}a_2\right)u\varepsilon+ \left(-a_3+\frac{\lambda}{\kappa}a_3\right)\varepsilon^2+\dots\biggr ] = 0\end{aligned}\ ] ] truncating at second order terms , we obtain : equating terms of the same power to zero gives , and . hence , the center manifold for system ( [ eq : a14 ] ) is : which , for , gives the result shown in . finally , substituting ( [ eq : a20 ] ) into ( [ eq : a14 ] ) , we obtain the vector field reduced to the center manifold , according to equation ( [ eq:35 ] ) of theorem [ th:3 ] . in fact , if , and , formula ( [ eq : a18 ] ) becomes : thus : , \notag \\ \frac{d \varepsilon}{ds}&=0\end{aligned}\ ] ] or , in terms of the original time
scale , , \notag \\ \dot\varepsilon&=0\end{aligned}\ ] ] let us now conclude by showing that the center manifold obtained following this method is asymptotically sufficiently close to ( [ eq : a11 ] ) . we can recover the equation in . in fact , since , from ( [ eq : a20 ] ) and we have considering , one has which is the second equation of ( [ eq : a11 ] ) . we can conclude that , supposing , the center manifold determined in this way approximates the solution given by the sqssa , which coincides with the root related to the application of tihonov's theorem . in figure ( [ figura2 ] ) we compare the sqssa of system ( [ eq : a9s ] ) , obtained from ( [ eq : a10carr ] ) , with the center manifold ( [ eq : aaa20 ] ) , at the zeroth order and at the first order in , respectively . obviously , the latter gives a better approximation of the numerical solution of ( [ eq : a9s ] ) , while the former well approximates the sqssa curve , which in fact can be considered the zeroth order term of an asymptotic expansion of the solution in terms of . of the numerical solution of the system ( [ eq : a9s ] ) ( blue solid line ) with its sqssa ( [ eq : a10carr ] ) ( black solid line ) and its zeroth order ( dashed line ) and first order ( dashed / dotted line ) center manifold ( [ eq : aaa20 ] ) . the parameter sets are the following . left : . the set was taken from . right : . the set was taken ( and modified ) from . since in both cases the value of is sufficiently small , the different approximations converge to the graph of the numerical solution . in the plot on the right it is possible to appreciate the different behavior of the zeroth order and the first order center manifolds . while the first order manifold better approximates the numerical solution , the zeroth order converges to the sqssa , which does not approximate the numerical solution sufficiently well , since it is the zeroth order term of the singular perturbation of the solution in terms of the parameter .,title="fig:",scaledwidth=50.0% ] let us now consider system ( [ eq : a4 ] ) . let us nondimensionalize the system , as in where and with system parameters such that .
by applying the tqssa ( which corresponds to imposing ) to ( [ eq : aaa12 ] ) , we have and , solving in : which represents the singular point ( or _ outer solution _ ) of ( [ eq : aaa12 ] ) , where , , are viewed as fixed positive constants and is the parameter . its fixed point is . the new parameter appears already in and was used in to determine the asymptotic expansions whose leading term is just the tqssa . in 2008 khoo and hegland applied tihonov's theorem in order to study the tqssa , obtaining results similar to those in . the aim of this subsection is to determine the center manifold for ( [ eq : aaa12 ] ) , using the techniques described in , and to show that it is asymptotically equivalent to the singular points related to tihonov's theory . to this aim , let us now set ; system ( [ eq : aaa12 ] ) can be rewritten in the form ( _ inner solution _ ) in order to obtain for ( [ eq : a22 ] ) a block form of the type ( [ eq:34 ] ) , we make the substitution , i.e. , . hence , doing so , and introducing the new variable , system ( [ eq : a22 ] ) becomes the associated linearized system has a diagonal form and , in fact , the eigenvalues are given by ( with multiplicity ) and . also in this case the eigenvalue has multiplicity . we solve ( [ eq:40 ] ) for system ( [ eq : a23 ] ) , employing theorem [ th:5 ] , and determine the center manifold . referring to ( [ eq:40 ] ) and ( [ eq:34 ] ) , we have that , . accordingly , we search for a function such that using theorem [ th:5 ] we assume substituting ( [ eq : a24 ] ) into ( [ eq : a25 ] ) , one has : truncating at second order terms , we obtain : so , u^2+\ ] ] u \varepsilon+ ( \eta + \kappa_m)a_3 \varepsilon^2 = 0\ ] ] from which : hence , the center manifold for system ( [ eq : a23 ] ) is : finally , substituting ( [ eq : a27 ] ) into ( [ eq : a23 ] ) we obtain the vector field reduced to the center manifold , according to equation ( [ eq:35 ] ) of theorem [ th:3 ] .
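the biologically allowed root of the tqssa quadratic and the leading - order expansion of its square root agree for small total substrate ; a quick numerical check with illustrative constants ( e_T = K_M = 1 , names ours ) :

```python
# the biologically allowed root of c^2 - (e_T + K_M + u)*c + e_T*u = 0
# versus the leading-order expansion of its square root, for small u.
# constants and symbol names are illustrative (ours).
import math

e_T, K_M = 1.0, 1.0

def exact_root(u):
    a = e_T + K_M + u
    return (a - math.sqrt(a * a - 4.0 * e_T * u)) / 2.0

def expanded(u):                   # sqrt(a^2 - 4 e_T u) ~ a - 2 e_T u / a
    return e_T * u / (e_T + K_M + u)

u = 0.01
rel_err = abs(exact_root(u) - expanded(u)) / exact_root(u)
```

for u two orders of magnitude below e_T + K_M , the relative discrepancy is well below one percent , which is the sense in which the two closed forms can be read as different approximations of the same manifold .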
then : , \\\dot\varepsilon&=0 \end{cases}\ ] ] or , in terms of the original time scale , , \notag \\ \dot\varepsilon&=0\end{aligned}\ ] ]let us show that the center manifold obtained in ( [ eq : a27 ] ) is asymptotically close to the root given by ( [ eq : a29 ] ) , in terms of tihonov s theorem . from ( [ eq : a27 ] ) , and since , with have since equation ( [ eq : a29 ] ) is obtained putting , one has while , approximating the square root in ( [ eq : a29 ] ) , one has \sim \frac{\eta + \kappa_m+\sigma u}{\eta \sigma } \varepsilon = \frac{u}{\eta + \kappa_m+\sigma u } , \ \ \ \text{for } \ u\rightarrow0\end{aligned}\ ] ] it follows that both ( [ eq : a30 ] ) and ( [ eq : a31 ] ) are asymptotic to when .this means that both the expressions can be intended as two different approximations of the center manifold . in figure ( [ figura3 ] )we compare the tqssa of system ( [ eq : aaa12 ] ) , obtained from ( [ eq : a29 ] ) , with the center manifold ( [ eq : aaa30 ] ) , at the zeroth order and at the first order in , respectively .obviously , the latter gives a better approximation of the numerical solution of ( [ eq : aaa12 ] ) , while the former well approximates the tqssa curve , which in fact can be considered the zeroth order term of an asymptotic expansion of the solution in terms of .of the numerical solution of the system ( [ eq : aaa12 ] ) ( blue solid line ) with its tqssa ( [ eq : a29 ] ) ( black solid line ) and its zeroth order ( dashed line ) and first order ( dashed / dotted line ) center manifold ( [ eq : aaa30 ] ) .the parameter sets are the following . left : .the set was taken from .right : .the set was taken from . in the plot on the left , since the value of is sufficiently small , the different approximations converge to the graph of the numerical solution . 
in the plot on the right it is possible to appreciate the different behavior of the zeroth-order and the first-order center manifolds : while the first-order manifold better approximates the numerical solution , the zeroth order converges to the tqssa , which does not approximate the numerical solution sufficiently well , since it is only the zeroth-order term of the singular perturbation expansion of the solution in terms of the parameter .,title="fig:",scaledwidth=50.0% ] let us consider now a more general system of the following form ( _outer solution_ ) and the corresponding _inner solution_ ( with ) where the origin is a fixed point for ( [ eq : b1 ] ) . the heineken-tsuchiya-aris system ( [ eq : a13 ] ) and the system obtained by the tqssa approximation ( [ eq : a22 ] ) are particular cases of the system ( [ eq : b1])-([eq : b2 ] ) . we are able to state a more general theorem concerning the center manifold , which is the main result of our paper .
let ; hence : +b\left[au+bv+\psi\left(u , v\right)\right]\ ] ] doing so , system ( [ eq : b1 ] ) becomes , for and , the associated linearized system has a block form of type ( [ eq:34 ] ) and , in fact , the eigenvalues are given by ( with multiplicity ) and .thus in every system of the form ( [ eq : b3 ] ) we are in presence of a center manifold .we write equation ( [ eq:40 ] ) for system ( [ eq : b3 ] ) , employing theorem [ th:5 ] .referring to ( [ eq:40 ] ) and ( [ eq:34 ] ) , we have that , .accordingly , we search for a function such that from which , since is a function at least of third order in and , while we are interested in a second order expression of function , we can neglect this term and focus on the center manifold of ( [ eq : b1 ] ) and the isolated point of ( [ eq : b1 ] ) are asymptotically equivalent . * step 1 .* using theorem [ th:5 ] we assume and it is trivial to prove that satisfies ( [ eq : b4 ] ) for .moreover , from ( [ eq : b2 ] ) , +\dots\ ] ] where contains the quadratic terms in and . 
since the terms in and , with , are at least of third order in and , we consider only the term in . therefore , ^2+\dots\ ] ] while for it is sufficient to consider the first order expansion in because , otherwise , in ( [ eq : b4 ] ) we would have third order terms for in and . thus , +\dots\ ] ] where we recall that . accordingly , equation ( [ eq : b4 ] ) becomes : \notag \\ & + \frac{b}{2 } \left[\psi_{uu}(0,0)-2\frac{a}{b } \psi_{u , v}(0,0)+\left(\frac{a}{b}\right)^2 \psi_{v , v}(0,0)\right]u^2+\dots = 0\end{aligned}\ ] ] equating to zero the terms of the same power gives \notag \\\lambda_2&=-\frac{a}{b}\left[\varphi_u(0,0)-\frac{a}{b } \varphi_v(0,0)\right ] \notag \\ \lambda_3&=0\end{aligned}\ ] ] hence , the center manifold for system ( [ eq : b1 ] ) is u^2 \notag \\ &-\frac{a}{b}\left[\varphi_u(0,0)-\frac{a}{b } \varphi_v(0,0)\right ] u \varepsilon+\dots\end{aligned}\ ] ] setting in the rhs , we obtain the center manifold of ( [ eq : b1 ] ) . _singular point technique._ on the other hand , since , setting , we have that equation ( [ eq : b4 ] ) becomes , which gives an identity up to if we substitute as above and if we operate a taylor expansion around . the vector field reduced to the center manifold , from equation ( [ eq:35 ] ) of theorem [ th:3 ] , is : or , in terms of the original time scale , . moreover : where and all the derivatives are calculated in . hence , +w\frac{\varphi_v(0,0)}{b}+\dots\ ] ] and , since ] ) , differently from what is wrongly stated . heineken et al . and subsequently other authors interpreted the sqssa and tqssa as leading order expansions of the solutions in terms of a suitable parameter , which has to be considered small . this interpretation allows us to embed the qssa theory in a framework which is related to tihonov's theorem , where the parameter multiplies the derivative of and the qssa can be obtained as the singular point of the original system , setting .
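As a concrete illustration of the tQSSA as a singular-point reduction, the sketch below integrates the standard mass-action Michaelis–Menten system and compares it with the reduced tQSSA equation for the total substrate s̄ = s + c, where the complex is slaved to the smaller root of c² − (E_T + K_M + s̄)c + E_T·s̄ = 0 (the classical total quasi-steady state formula of Borghans–De Boer–Segel). The rate constants are illustrative, not taken from this paper:

```python
import math

# Mass-action Michaelis-Menten kinetics, S + E <-> C -> E + P
# (illustrative rate constants, not taken from the paper).
k1, km1, k2, E_T = 1.0, 1.0, 1.0, 0.1
K_M = (km1 + k2) / k1

def c_tqssa(s_bar):
    # tQSSA: the smaller root of c**2 - (E_T + K_M + s_bar)*c + E_T*s_bar = 0.
    b = E_T + K_M + s_bar
    return (b - math.sqrt(b * b - 4.0 * E_T * s_bar)) / 2.0

def simulate(T=10.0, dt=1e-3, s0=1.0):
    # Full system (explicit Euler) against the reduced tQSSA equation for
    # the total substrate s_bar = s + c, whose exact evolution is
    # d(s_bar)/dt = -k2*c.
    s, c = s0, 0.0
    s_bar_red = s0
    for _ in range(int(T / dt)):
        v = k1 * (E_T - c) * s - km1 * c      # net complex formation
        s, c = s + dt * (-v), c + dt * (v - k2 * c)
        s_bar_red += dt * (-k2 * c_tqssa(s_bar_red))
    return s + c, c, s_bar_red

s_bar_full, c_full, s_bar_red = simulate()
```

With E_T small relative to K_M + s̄ (so the perturbation parameter is small), the reduced equation tracks the full dynamics closely after the fast transient, exactly as the reduced (outer) solution of Tihonov theory should.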
in this paper we have shown that , at least in the classical simple scheme ( [ eq : a3 ] ) , the approximation obtained by applying tihonov's theorem is asymptotically equivalent to the center manifold of the system , which means that the reduced system and the center manifold are two sides of the same coin . once again , the total qssa has proven to be much more efficient and natural than the standard one , mainly thanks to the fact that the parameter used for the expansions in the total framework is always less than . in our ongoing research we are applying the techniques shown in this paper to more complex enzyme reactions , such as fully competitive inhibition , the phosphorylation-dephosphorylation cycle ( or goldbeter-koshland switch ) , the linear double phosphorylation reaction , the double phosphorylation-dephosphorylation cycle and , more generally , futile cycles . these mechanisms were already studied in terms of the tqssa in previous papers . the techniques shown here will allow the tqssa to be read as the leading term of an asymptotic expansion in terms of a suitable perturbation parameter in these more complex cases , too .

the authors are deeply grateful to prof . enzo orsingher , from sapienza university ( rome , italy ) , and prof . jan andres , from palacky university ( olomouc , czech republic ) , for their translations of the papers and some precious clarifications concerning some of their passages .

d. andreucci , a. m. bersani , g. dellacqua , e. bersani , c. de lazzari , m. ledda , a. lisi , g. pontrelli , a reaction - diffusion numerical model to predict cardiac tissues regeneration via stem cell therapy , mascot11 proceedings , r. m. spitaleri ( ed . ) , imacs series in computational and applied mathematics 17 ( 2013 ) , imacs , rome , pp .a.m. bersani , e. bersani , g.
dellacqua , m.g .pedersen , new trends and perspectives in nonlinear intracellular dynamics : one century from michaelis menten paper , continuum mechanics and thermodynamics , 1 - 26 , ( 2014 ) .a.m. bersani , e. bersani , l. mastroeni , deterministic and stochastic models of enzymatic networks - applications to pharmaceutical research , comp ., special issue : r. tadei and n. bellomo ( editors ) , `` modeling and computational methods in genomic sciences '' , 55 , pp .879 - 888 ( 2008 ) a. m. bersani , a. borri , a. milanesi , p. vellucci , tihonov theory and center manifolds for inhibitory mechanisms in enzyme kinetics , submitted to comm .ind . math .a.m. bersani , g. dellacqua , is there anything left to say on enzyme kinetic constants and quasi - steady state approximation ?, j. math ., 50 , 335 - 344 ( 2012 ) a.m. bersani , g. dellacqua , g. tomassetti , on stationary states in the double phosphorylation - dephosphorylation cycle , aip conf .1389 , numerical analysis and applied mathematics icnaam , halkidiki ( greece ) , 19 - 25 september 2011 , pp .1208 - 1211 ( 2011 ) m.z .bodenstein , eine theorie der photochemischen reaktionsgeschwindigkeiten .85 , 329397 ( 1913 ) j. borghans , r. de boer and l. segel , extending the quasi - steady state approximation by changing variables , bull .biol . , 58 , pp . 4363 ( 1996 ) j. carr , applications of center manifold theory .springer - verlag : new york , heidelberg , berlin , ( 1981 ) .chapman , l.k .underhill , the interaction of chlorine and hydrogen . the influence of mass , j. chem ., 103 , pp . 496508 ( 1913 ) g. dellacqua and a.m. bersani , a perturbation solution of michaelis - menten kinetics in a `` total '' framework , j. math ., 50 , pp . 11361148( 2012 ) g. dellacqua , a.m. bersani , quasi - steady state approximations and multistability in the double phosphorylation - dephosphorylation cycle , communications in computer and information science 273 , pp .155 - 173 ( 2012 ) g. dellacqua , a.m. 
bersani , bistability and the complex depletion paradox in the double phosphorylation - dephosphorylation cycle , proceedings bioinformatics 2011 , pp .55 - 65 ( 2012 ) .s. dimitrov , s. markov , metabolic rate constants : some computational aspects , mathematics and computers in simulation , 133 , pp . 91110 ( 2017 ) j. w. dingee and a. b. anton , a new perturbation solution to the michaelis - menten problem , aiche j. , 54 , pp .13441357 ( 2008 ) i. dvok , j. ika , analysis of metabolic systems with complex slow and fast dynamics , bull .biol . , 51 , pp .255 - 274 ( 1989 ) i. giorgio , u. andreaus , d. scerrato , f. dellisola , a visco - poroelastic model of functional adaptation in bones reconstructed with bio - resorbable materials .model mechanobiol .( 2016 ) 15(5 ) : 1325 - 1343 .a. goldbeter , d.e .koshland , an amplified sensitivity arising from covalent modification in biological system , proc ., 78 , pp . 6840 - 6844( 1981 ) g. g. hammes , thermodynamics and kinetics for the biological sciences , wiley - interscience , new york ( 2000 ) .f. g. heineken , h. m. tsuchiya , r. aris , on the mathematical status of the pseudo - steady state hypothesis of biochemical kinetics , mathematical biosciences 1.1 , 95 - 113 , ( 1967 ) .khoo , m. hegland , the total quasi - steady state assumption : its justification by singular perturbation and its application to the chemical master equation , anziam j. , 50 , pp .c429c443 ( 2008 ) a. kumar , k. josic , reduced models of networks of coupled enzymatic reactions .journal of theoretical biology 278.1 , 87 - 106 , ( 2011 ) .k. j. laidler , theory of the transient phase in kinetics , with special reference to enzyme systems , can .33 , pp . 16141624( 1955 ) a. l. lehninger , principles of biochemistry , w.h . freeman & company , new york ( 2008 )lin , l.a .segel , mathematics applied to deterministic problems in the natural sciences , society for industrial and applied mathematics ( siam ) , philadelphia ( 1988 ) y. 
lu , t. lekszycki , a novel coupled system of non - local integro - differential equations modelling young s modulus evolution , nutrients supply and consumption during bone fracture healing . .( 2016 ) 67 : 111 .j. monod , j. wyman , j .-changeux , on the nature of allosteric transitions : a plausible model . j. mol12 , pp . 88118( 1965 ) a.h .nguyen and s.j .fraser , geometrical picture of reaction in enzyme kinetics , j. chem .91 , pp . 186193( 1989 ) b.o .palsson , e.n .lightfoot , mathematical modelling of dynamics and control in metabolic networks .i. michaelis - menten kinetics , j. theor . biol . , 111 , pp . 273 - 302 ( 1984 ) b.o .palsson , e.n .lightfoot , mathematical modelling of dynamics and control in metabolic networks .simple dimeric enzymes , j. theor ., 111 , pp . 303 321 ( 1984 ) b.o .palsson , h. palsson , e.n .lightfoot , mathematical modelling of dynamics and control in metabolic networks .iii . linear reaction sequences , j. theor . biol . , 113 , pp . 231 259 ( 1985 ) b.o .palsson , on the dynamics of the irreversible michaelis - menten reaction mechanism , chem ., 42 , pp .447 - 458 ( 1987 ) m. g. pedersen , a. m. bersani and e. bersani , the total quasi steady - state approximation for fully competitive enzyme reactions , bull .biol . , 69 , pp .433457 ( 2005 ) m. g. pedersen , a. m. bersani , e. bersani and g. cortese , the total quasi - steady state approximation for complex enzyme reactions , mathematics and computers in simulation ( matcom ) , 79 , pp . 10101019( 2008 ) m. g. pedersen , a. m. bersani and e. bersani , quasi steady - state approximations in intracellular signal transduction a word of caution , j. math ., 43 , pp . 13181344 ( 2008 ) m. g. pedersen and a. m. bersani , introducing total substrates simplifies theoretical analysis at non - negligible enzyme concentrations : pseudo first - order kinetics and the loss of zero - order ultrasensitivity , j. math .biol . , 60 , pp .267 - 283 ( 2010 ) m. schauer , r. 
heinrich , analysis of the quasi - steady - state approximation for one - substrate reaction , j. theor .biol . , 79 , pp .425 - 442 ( 1979 ) s. schnell , p.k .maini , enzyme kinetics far from the standard quasi - steady state and equilibrium approximation , math .comput . model ., 35 , pp .137 - 144 ( 2002 ) l. a. segel , modeling dynamic phenomena in molecular and cellular biology , cambridge univ .press , cambridge ( 1984 ) l. a. segel , on the validity of the steady state assumption of enzyme kinetics , bull . math .biol . , 50 , pp .579593 ( 1988 ) l. a. segel and m. slemrod , the quasi steady - state assumption : a case study in pertubation , siam rev ., 31 , pp . 446477 ( 1989 ) j. sijbrand , properties of center manifolds , trans .289 , 431 - 469 , ( 1985 ) .swoboda , the kinetics of enzyme action , biochim .acta , 23 , pp . 7080 ( 1957 )swoboda , the kinetics of enzyme action , ii . the terminal phase of the reaction , biochim . biophys .acta , 25 , pp .132135 ( 1957 ) a.n .tikhonov , on the dependence of the solutions of differential equations on a small parameter , matematicheskii sbornik 64.2 , 193 - 204 , ( 1948 ) .tikhonov , on a system of differential equations containing parameters ., 27 , 147156 , ( 1950 ) .tikhonov , systems of differential equations containing small parameters in the derivatives .matematicheskii sbornik 73.3 , 575 - 586 , ( 1952 ) .l. wang , e.d .sontag , on the number of steady states in a multiple futile cycle , j. math ., 57 , pp .29 - 52 ( 2008 ) l. wang , e.d .sontag , singularly perturbed monotone systems and an application to double phosphorylation cycles , j. nonlinear sci ., 18 , pp . 527550 ( 2008 ) w. wasow , asymptotic expansions for ordinary differential equations .courier corporation , ( 2002 ) .s. wiggins , normally hyperbolic invariant manifolds in dynamical systems .springer - verlag : new york , heidelberg , berlin , ( 1994 ) .s. wiggins , introduction to applied nonlinear dynamical systems and chaos .2 . 
springer science business media , ( 2003 ) . e. n. yeremin , the foundations of chemical kinetics , mir pub . , moscow ( 1979 )

abstract . in this paper we prove that the well - known quasi - steady state approximations , commonly used in enzyme kinetics , which can be interpreted as the reduced system of a differential system depending on a perturbative parameter , according to tihonov theory , are asymptotically equivalent to the center manifold of the system . this allows us to give a mathematical foundation for the application of a mechanistic method to determine the center manifold of ( at this moment , still simple ) enzyme reactions .
the sorting of the suffixes of a text plays a fundamental role in text algorithms , with several applications in many areas of computer science and bioinformatics . for instance , it is a fundamental step , in an implicit or explicit way , for the construction of the suffix array ( ) and the burrows - wheeler transform ( ) . the , introduced in 1990 ( cf . ) , is a sorted array of all suffixes of a string , where the suffixes are identified by their positions in the string . several strategies that privilege the efficiency of the running time or low memory consumption have been widely investigated ( cf . ) . the , introduced in ( cf . ) , permutes the letters of a text according to the sorting of its cyclic rotations , making the text more compressible ( cf . ) . a recent survey on the combinatorial properties that guarantee such compressibility after the application of can be found in ( cf . also ) . moreover , in recent years the and the , besides being important tools in data compression , have found many applications well beyond their original purpose ( cf .
) . the goal of this paper is to introduce a new strategy for the sorting of the suffixes of a word that opens new scenarios for the computation of the and the . our strategy uses a well - known factorization of a word called the _lyndon factorization_ , and is based on a combinatorial property , proved in this paper , that allows one to sort the suffixes of ( "global suffixes" ) by using the sorting of the suffixes inside each block of the decomposition ( "local suffixes" ) . the lyndon factorization is based on the fact that any word can be written uniquely as , where

* the sequence is non - increasing with respect to the lexicographic order ;
* each is strictly less than any of its proper cyclic shifts ( lyndon words ) .

this factorization was introduced in , and a linear - time algorithm is due to duval . the intuition that the knowledge of the lyndon factorization of a text can be used for the computation of the suffix array of the text itself was introduced in . conversely , a way to find the lyndon factorization from the suffix array can be found in . if is a factor of a word , we say that the sorting of the local suffixes of is _compatible_ with the sorting of the global suffixes of if the mutual order of two local suffixes in is kept when they are extended as global suffixes . the main theorem in this paper states that if is a concatenation of consecutive lyndon factors , then the local suffixes in are compatible with the global suffixes . this suggests some new algorithmic scenarios for the construction of the and the . in fact , by performing the lyndon factorization of a word by duval's algorithm , one does not need to get to the end of the whole word in order to start the decomposition into lyndon factors . since our result allows one to start the sorting of the local suffixes ( compatible with the sorting of the global suffixes ) as soon as the first lyndon word is discovered , this may suggest an online algorithm that does not require reading the entire word to start
sorting .moreover , the independence of the sorting of the local suffixes inside the different lyndon factors of a text suggests also a possible parallel strategy to sort the global suffixes of the text itself . in section [ sec : prel ]we give the fundamental notions and results concerning combinatorics on words , the lyndon factorization , the burrows - wheeler transform and the suffix array . in section [sec : method ] we first introduce the notion of global suffix on a text and local suffix inside a factor of the text .then we prove the compatibility between the ordering of local suffixes and the ordering of global suffixes . in section [ sec : algo ] we describe an algorithm that uses the above result to incrementally construct the of a text . such a method can be also used to explicitly construct the of the text . in section [ sec : conclusion ] we discuss about some possible improvements and developments of our method , including implementations in external memory or in place constructions .finally , we compare our strategy for sorting suffixes with the method proposed in in which a lightweight computation of the of a text is performed by partitioning it into factors having the same length .let be a finite alphabet with . given a finite word , for , a _ factor _ of written as = a_i \cdots a_j ] is called a _ prefix _ , while a factor ] if and only if ] . the table obtained by lexicographically sorting all the suffixes of is depicted in figure [ fig : bwt ] .let and let be its lyndon factorization . 
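For concreteness, the objects recalled in the preliminaries can be computed with a naive sketch (our own illustration, in Python, using the usual `$` end-of-string sentinel, assumed smaller than every letter; production constructions are linear-time). The BWT character for each lexicographically sorted suffix is the text character preceding it, and the inversion uses only counting machinery — `C[c]`, the number of characters smaller than `c`, plus rank (occ) queries:

```python
def suffix_array(s):
    # Naive construction by sorting suffixes directly: O(n^2 log n),
    # fine for illustration (linear-time constructions exist).
    return sorted(range(len(s)), key=lambda i: s[i:])

def bwt(s, sentinel="$"):
    # BWT via the suffix array of s + sentinel: the output character for
    # each sorted suffix is the one preceding it in the text (the suffix
    # starting at position 0 is preceded by the sentinel, t[-1]).
    t = s + sentinel
    return "".join(t[i - 1] for i in suffix_array(t))

def inverse_bwt(L):
    # Inversion using only counting machinery: C[c] = number of characters
    # smaller than c, and L[:i+1].count(L[i]) is a rank (occ) query.
    # The LF-mapping sends row i to the row of the preceding text character.
    C = {c: sum(1 for x in L if x < c) for c in set(L)}
    lf = lambda i: C[L[i]] + L[:i + 1].count(L[i]) - 1
    i, chars = 0, []  # row 0 is the rotation starting with the sentinel
    for _ in range(len(L) - 1):
        chars.append(L[i])
        i = lf(i)
    return "".join(reversed(chars))
```

For example, `suffix_array("banana$")` is `[6, 5, 3, 1, 0, 4, 2]`, `bwt("banana")` is the classical `annb$aa`, and `inverse_bwt` recovers `banana`. The `C`/rank bookkeeping here is the same machinery used by the counting functions of step [it:a] of the algorithm described later.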
for each factor , we denote by and the positions of the first and the last character , respectively , of the factor in . let be a factor of . we denote by ] and we call it a _global suffix_ of at position . we write instead of when there is no danger of ambiguity . let be a word and let be a factor of . we say that the sorting of the suffixes of is _compatible_ with the sorting of the suffixes of if for all with , . notice that , in general , for an arbitrary factor of a word , the sorting of its suffixes is not compatible with the sorting of the suffixes of . consider for instance the word and its factor . then whereas . [ th : suforder ] let and let be its lyndon factorization . let . then the sorting of the suffixes of is compatible with the sorting of the suffixes of . let and be two indexes with , both contained in . we just need to prove that . let ] . suppose that . then by the definition of lexicographic order . if , there is nothing to prove . if , then is a prefix of , so by the definition of lexicographic order . now suppose that . this means that . if , there is nothing to prove . if , the index is in some lyndon factor with ; then . we denote ] . so . such a hypothesis , although strong , is not restrictive , because one can obtain the lyndon factorization of any word in linear time ( cf . ) . as shown in the previous section , the hypothesis that is factorized into lyndon words suggests connecting the sorting of the local suffixes of to the lexicographic sorting of the global suffixes of . our algorithm , called bwt_lynd , considers the input text ] .
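The compatibility theorem can be checked by brute force. Below, `lyndon_factorization` is Duval's linear-time algorithm, and `compatible` tests whether the local suffix order of a factor `w[start:end]` agrees with the global one; the word `banana` and the counterexample `aab` are our own illustrative choices (the paper's example words are not reproduced in this extraction):

```python
def lyndon_factorization(w):
    # Duval's linear-time algorithm: returns the unique non-increasing
    # sequence of Lyndon words whose concatenation is w.
    factors, i, n = [], 0, len(w)
    while i < n:
        j, k = i + 1, i
        while j < n and w[k] <= w[j]:
            k = i if w[k] < w[j] else k + 1
            j += 1
        while i <= k:
            factors.append(w[i:i + j - k])
            i += j - k
    return factors

def compatible(w, start, end):
    # Brute force: does the order of the local suffixes of u = w[start:end]
    # agree with the order of the corresponding global suffixes of w?
    u = w[start:end]
    return all((u[p:] < u[q:]) == (w[start + p:] < w[start + q:])
               for p in range(len(u)) for q in range(p + 1, len(u)))

w = "banana"
assert lyndon_factorization(w) == ["b", "an", "an", "a"]
# u = w[1:5] = "anan" is the concatenation of two consecutive Lyndon
# factors, so the theorem applies:
assert compatible(w, 1, 5)
# An arbitrary factor need not be compatible: in "aab" (itself a Lyndon
# word), the factor "aa" has local order "a" < "aa", while the global
# extensions satisfy "aab" < "ab".
assert not compatible("aab", 0, 2)
```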
for each block , with ranging from to , the algorithm bwt_lynd executes the following steps :

1 . [ it : bwtsa ] compute the and .
2 . [ it : a ] compute the counter array ] , storing the number of suffixes of the string which are lexicographically smaller than the -th suffix of .
3 . [ it : merge ] merge and in order to obtain .

[ ex : merge ] let . the lyndon factorization of is , where . figure [ fig:1 ] illustrates how step [ it : merge ] of the algorithm works . note that the positions of the suffixes in ( i.e. in ) are shifted by positions . notice that in the algorithm bwt_lynd we do not actually compute the sorted list of suffixes , but we show it in figure [ fig:1 ] to ease the comprehension of the algorithm . moreover , the algorithm can be simply adapted to compute the suffix array of , so in figure [ fig:1 ] the suffix arrays are also shown . step [ it : bwtsa ] can be executed in linear time , if and are stored in internal memory ( see ) . during step [ it : a ] , the algorithm uses the functions and described as follows . for any character , let denote the number of characters in that are smaller than , and let denote the number of occurrences of in ] , which stores in ] because \ ] has the same rank as between the suffixes of and it is preceded by the same symbol in . consequently , our algorithm considers the suffixes \ ] and the suffix ( of the string ) as the same suffix . it is easy to prove the following : let ] be the number of suffixes of lexicographically smaller than ] ; then , =c(\lbwt(l_1 \cdots l_{i-1}\),c , a[j+1]).\ ] ] since is the first symbol of the suffix ] , all the suffixes of starting with a symbol smaller than are lexicographically smaller than ] . this is equivalent to counting how many s occur in ] . it is easy to verify that we can obtain the array by using the array and the suffix array , i.e.
= a[sa(l_i\ ] . note that the array contains the partial sums of the values of the array used in . however , we could directly compute the array by using the notion of the inverse suffix array of a word , which is the inverse permutation of , i.e. , = i ] . the value ] values from followed by the value $ ] . it is easy to see that the time complexity of step [ it : merge ] is , too . from the description of the algorithm , and by proceeding by induction , one can prove the following proposition : at the end of the iteration , algorithm bwt_lynd correctly computes . each iteration runs in time . the overall time complexity is , where .

the goal of this paper is to propose a new strategy to compute the and the of a text by decomposing it into lyndon factors and by using the compatibility relation between the sorting of its local and global suffixes . at the moment , the quadratic cost of the algorithm could make it impractical . however , on the one hand , in order to improve our algorithm , efficient dynamic data structures for the rank operations and for the insertion operations could be used . navarro and nekrich's recent result on optimal representations of dynamic sequences shows that one can insert symbols at arbitrary positions and compute the rank function in optimal time within essentially bits of space , for a sequence of length . on the other hand , our technique , differently from other approaches in which partitions of the text are performed , is quite versatile , so that it can easily be adapted to different implementation scenarios . for instance , in the authors describe an algorithm , called bwte , that logically partitions the input text of length into blocks of the same length , i.e. , and computes incrementally the of via iterations , one per block of . text blocks are examined from right to left , so that at iteration they compute and store on disk given .
in this case the mutual order of the suffixes in each block depends on the order of the suffixes of the next block . our algorithm bwt_lynd builds the of a text or its by scanning the text _from left to right_ , and it could run online , i.e. while the lyndon factorization is being computed . one of the advantages is that adding new text to the end does not require recomputing the mutual order of the suffixes of the text analyzed before , except for the suffixes of the last lyndon word , which could change by adding characters on the right . moreover , as described in the previous section , the text could be partitioned into several sequences of consecutive blocks of lyndon words , and the algorithm can be applied _in parallel_ to each of those sequences . furthermore , the lyndon factorization itself can also be performed in parallel , as shown in . alternatively , since we read each symbol only once , an in - place computation could also be suggested by the strategy proposed in , in which the space occupied by the text is used to store the . finally , in the description of the algorithm we did not mention the workspace used . in fact , it could depend on the time - space trade - off that one wishes to reach . for instance , the methodologies used in , where disk data accesses are executed only via sequential scans , could be adapted in order to obtain a lightweight version of the algorithm . an external memory algorithm for the lyndon factorization can be found in . we remark that the method proposed in could be integrated into bwt_lynd , in the sense that one can apply bwte to compute at each iteration the and the of each block of the lyndon partition . in conclusion , our method seems to lay out the path towards a new approach to the problem of sorting the suffixes of a text , in which partitioning the text by using its combinatorial properties allows one to tackle the problem in local portions of the text in order to extend solutions efficiently to a global dimension .
abstract . the process of sorting the suffixes of a text plays a fundamental role in text algorithms . they are used for instance in the construction of the burrows - wheeler transform and the suffix array , widely used in several fields of computer science . for this reason , several recent works have been devoted to finding new strategies to obtain effective methods for such a sorting . in this paper we introduce a new methodology in which an important role is played by the lyndon factorization , so that the local suffixes inside factors detected by this factorization keep their mutual order when extended to the suffixes of the whole word . this property suggests a versatile technique that can easily be adapted to different implementation scenarios .

keywords : sorting suffixes , bwt , suffix array , lyndon words , lyndon factorization
we address the issue of front propagation in a reaction - transport equation of kinetic type , here , the density describes a population of individuals in a continuum setting , and is the macroscopic density .the subset is the set of all possible velocities .individuals move following a velocity - jump process : they run with speed , and change velocity at rate 1 .they instantaneously choose a new velocity following the probability distribution . unless otherwise stated , we assume in this paper that is symmetric and satisfies the following properties : , and in addition , individuals are able to reproduce , with rate .new individuals start with a random velocity chosen with the same probability distribution .we could have chosen a different distribution without changing the main results , but we do not for the sake of clarity of the presentation .finally , we include a quadratic saturation term , which accounts for local competition between individuals , regardless of their speed .the main motivation for this work comes from the study of pulse waves in bacterial colonies of _ escherichia coli _ .kinetic models have been proposed to describe the run - and - tumble motion of individual bacteria at the mesoscopic scale .several works have been dedicated to derive macroscopic equations from those kinetic models in the diffusion limit .recently it has been shown that for some set of experiments , the diffusion approximation is not valid , so one has to stick to the kinetic equation at the mesoscopic scale to carefully compare with data .there is one major difference between this motivation and model .pulse waves in bacterial colonies of _ e. coli _ are mainly driven by chemotaxis which create macroscopic fluxes .growth of the population can be merely ignored in such models . in model however , growth and dispersion are the main reasons for front propagation , and there is no macroscopic flux due to the velocity - jump process since the distribution satisfies . 
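Since growth and dispersion are the mechanisms driving propagation, the relevant macroscopic benchmark is the Fisher-KPP minimal front speed, recalled in the diffusion limit below. As a quick numerical reference point (our own illustrative discretization, for the normalized equation u_t = u_xx + u(1 − u) with unit rates, for which the minimal speed is c* = 2), a step initial datum produces a front whose u = 1/2 level set travels at a speed close to 2, slightly below it because of the slow logarithmic delay and discretization:

```python
def kpp_front_speed(L=150.0, nx=750, T=40.0, t0=15.0, dt=0.01):
    # Explicit finite differences for u_t = u_xx + u*(1 - u) with a step
    # initial datum; the front position is tracked by the level set
    # u = 1/2 and its speed measured after an initial transient.
    dx = L / nx                      # need dt <= dx**2 / 2 for stability
    u = [1.0 if i * dx < 10.0 else 0.0 for i in range(nx + 1)]
    front = lambda u: next(i for i, v in enumerate(u) if v < 0.5) * dx
    x0 = None
    for n in range(int(T / dt)):
        if x0 is None and n * dt >= t0:
            x0 = front(u)            # front position once the wave is formed
        w = u[:]
        for i in range(1, nx):
            w[i] = u[i] + dt * ((u[i - 1] - 2 * u[i] + u[i + 1]) / dx**2
                                + u[i] * (1.0 - u[i]))
        w[0], w[nx] = w[1], w[nx - 1]  # no-flux boundary conditions
        u = w
    return (front(u) - x0) / (T - t0)
```

With these (purely illustrative) grid parameters the measured speed comes out a few percent below 2, consistent with the Bramson logarithmic correction to the front position.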
for the sake of applications, we also refer to the growth and branching of the plant pathogen _ phytophthora _ by means of a reaction-transport equation similar to . there is a strong link between and the classical fisher-kpp equation. in the case of a suitable balance between scattering and growth (more scattering than growth), we can perform the parabolic rescaling in , and the diffusion limit yields , where is a solution to the fisher-kpp equation (see for example ). we recall that for nonincreasing initial data decaying sufficiently fast at , the solution of ([kpp]) behaves asymptotically as a travelling front moving at the minimal speed . in addition, this front is stable in some weighted space. therefore it is natural to address the same questions for . we give below the definition of a travelling wave for equation . [def:deftw] we say that a function is a smooth travelling front solution of speed of equation if it can be written , where the profile satisfies . in fact the profile is a solution to the stationary equation in a moving frame . the existence of travelling waves in reaction-transport equations has been addressed by schwetlick for a similar class of equations. first, the set is bounded and is the uniform distribution over . second, the nonlinearity can be chosen more generally (either monostable as here, or bistable), but it depends only on the macroscopic density. for the monostable case, using a quite general method he was able to prove existence of travelling waves of speed for any , a result very similar to the fisher-kpp equation. we emphasize that, although the equations differ between schwetlick's work and ours, they coincide as far as the linearization in the regime of low density is concerned.
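the minimal kpp speed recalled above can be recovered numerically from the linearized dispersion relation. this is a sketch, not the paper's computation: we assume the classical fisher-kpp linearization u_t = D u_xx + r u near u ~ 0, with the exponential ansatz u ~ exp(-lambda (x - c t)), which gives c(lambda) = D lambda + r / lambda, minimized at c* = 2 sqrt(r D) (r and D are placeholder parameters).

```python
import math

def kpp_minimal_speed(r, D, grid=None):
    """Minimal front speed from the linearised dispersion relation
    c(lambda) = D*lambda + r/lambda, minimised over a log-spaced grid
    of decay rates lambda > 0; the exact minimiser is lambda = sqrt(r/D)."""
    lams = grid or [10 ** (k / 200.0) for k in range(-800, 801)]
    return min(D * lam + r / lam for lam in lams)

r, D = 1.0, 1.0
c_star = kpp_minimal_speed(r, D)
print(c_star, 2.0 * math.sqrt(r * D))   # both ~ 2.0
```

the same minimization-over-decay-rates structure reappears in the implicit dispersion relation for the kinetic minimal speed stated further below.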
in contrast to schwetlick, we do not consider a general nonlinearity and we restrict ourselves to the logistic case, but we consider general velocity kernels. more recently, the rescaled equation ([eq:kinkpp2]) has been investigated by cuesta, hittmeir and schmeiser in the parabolic regime. using a micro-macro decomposition, they construct possibly oscillatory travelling waves of speed for small enough (depending on ). in addition, when , the set of admissible speeds is bounded, and they prove that the travelling wave constructed in this way is indeed nonnegative. lastly, when is the measure for some , equation ([eq:kinkpp]) is analogous to the reaction-telegraph equation for the macroscopic density (up to a slight change in the nonlinearity, however). this equation has been the subject of a large number of studies in the applied mathematics community. recently, the authors prove the existence of a minimal speed such that travelling waves exist for all speeds . moreover these waves are stable in some weighted space, with a weight which differs from the classical exponential weight arising for the fisher-kpp equation. as the reaction-telegraph equation involves both parabolic and hyperbolic contributions, the smoothness of the wave depends on the balance between these contributions. in fact there is a transition between a parabolic regime (smooth waves) and a hyperbolic regime (discontinuous waves), see remark [rem:two velocities] below. the authors also prove the existence of supersonic waves, having speed (see remark [rem:supersonic]). the aim of the present paper is to investigate the existence and stability of travelling waves for equation ([eq:kinkpp]) for arbitrary kernels satisfying ([eq:hypm]). for the existence part, we shall use the method of sub- and supersolutions, which does not rely on a perturbation argument. the stability part relies on the derivation of a suitable weight from which we can build a lyapunov
functional for the linearized version of . the crucial assumption for the existence of travelling waves is the boundedness of . we prove in fact that in the case there exists no (positive) travelling wave. we then investigate the spreading rate for some particular choices of (gaussian distribution, cauchy's distribution). unfortunately we are only able to give a partial answer to this last question. in the last stage of writing of this paper, we realized that this issue was already addressed by méndez, campos and gómez-portillo for a slightly different equation admitting the same linearization near the front edge. our results are in agreement with their predictions. [existence-tw] assume that the set is compact, and that satisfies . let . there exists a speed such that there exists a travelling wave solution of of speed for all . the travelling wave is nonincreasing with respect to the space variable: . moreover, if then there exists no travelling wave of speed . the minimal speed is given through the following implicit dispersion relation: for each there is a unique such that ; then we have the formula . [rem:two velocities] in the special case of two possible velocities only, corresponding to , two regimes have to be distinguished, namely and . in the case the travelling wave with minimal speed vanishes on a half-line. there, the speed of the wave is not characterized by the linearized problem for . note that this case is not contained in the statement of theorem [existence-tw] since it is assumed that . this makes a clear difference between the case of an integrable and the case of a measure with atoms. [rem:supersonic] we expect that travelling waves exist for any , although this seems to contradict the finite speed of propagation when . in fact _ supersonic _ waves corresponding to should be driven mainly by growth, as is the case in a simplified model with only two speeds.
a simple argument to support the existence of such waves consists in eliminating the transport part, and seeking waves driven by growth only . integrating with respect to yields a logistic equation for , which has a solution connecting and for any positive . however these waves are purely artificial and we do not address this issue further. we now define and investigate the dependence of the minimal speed with respect to the velocity kernel. in the following proposition, we give some general bounds on the minimal speed. [prop:bound on c*] assume that is symmetric and that for all . the minimal speed satisfies the following properties: 1. _ [scaling] _ for , define , then . 2. _ [rearrangement] _ denote the schwarz decreasing rearrangement of the function (see for a definition of this notion) and the schwarz increasing rearrangement of the density distribution , then . 3. _ [comparison] _ if then ; on the other hand, if then . 4. _ [diffusion limit] _ in the diffusion limit we recover the kpp speed of the wave . in the case of a bounded set of velocities, we prove that for suitable initial data, the front spreads asymptotically with speed , in a weak sense. [prop:spreadingbounded] assume that is bounded and that . let such that for all . let be the solution of the cauchy problem. then: 1. if there exists such that for all and , then for all , ; 2. if there exists and such that for all and , then for all , , where is the minimal speed of existence of travelling waves given by theorem [existence-tw]. we also establish linear and nonlinear stability in suitable weighted spaces. the key point is to derive a correct weight which enables us to build a lyapunov functional for the linear problem. we construct a semi-explicit weight, but we believe it is not the optimal one in some sense (see remark [rk:phi not optimal]). let be a travelling wave of speed , and let be the perturbation of in the moving frame .
neglecting the nonlinear contributions, we are led to investigate the linear equation . [linstability] there exists a weight such that the travelling front of speed is linearly stable in the weighted space . more precisely, the following lyapunov identity holds true for any solution of the linear equation . the weight is explicitly given in definition [eq:def weight phi]. using a comparison argument, in the spirit of , together with the explicit formula of the dissipation for the linearized system, we prove a nonlinear stability result. [nonlinstability] let . however this would require us to redefine the weight, since we believe it is not the optimal one. boundedness of is a crucial hypothesis in order to build the travelling waves. we believe that it is a necessary and sufficient condition. we make a first step to support this conjecture by investigating the case . we first prove infinite speed of spreading of the front under the natural assumption . as a corollary there cannot exist a travelling wave in the sense of definition [def:deftw]. note that there exist travelling waves under less restrictive conditions than definition [def:deftw], at least in the diffusive regime. these fronts are expected to oscillate as . we expect that such oscillating fronts do exist far from the diffusive regime.
in the case where and is gaussian, we plotted the dispersion relation in the complex plane, for an arbitrary given . we observed that it selects two complex conjugate roots, supporting the fact that oscillating fronts should exist (results not shown). [prop:spreadingunbounded] assume that for all . let such that for all , and assume there exist and such that for all and . let be the solution of the cauchy problem. then for all , . we can immediately deduce from this result the non-existence of travelling waves when , by taking such a travelling wave as an initial datum in order to reach a contradiction. assume that for all . then equation does not admit any travelling wave solution. next we investigate specific choices for the distribution , both numerically and theoretically. in the case of a gaussian distribution, we expect a spreading rate following the power law (see also ). to support this guess, we prove in fact that spreading occurs _ at most _ at this rate. for this purpose we build a supersolution which spreads at this rate. this issue has been addressed by méndez, campos and gómez-portillo in a physics paper for a slightly different equation, where the nonlinearity lies in the diffusion kernel instead of the growth rate. they conjectured that, as for the kpp equation, the front speed is determined through the linearization of the equation near the unstable steady state. we believe that the linearization should give the power law of the propagation, but it is not clear to us that it will give the exact location of the transition.
however, it turns out that the linearized equations in and in the present paper are the same. then, performing a fourier-laplace transform of the solution, méndez et al. derived heuristically the power law of the propagation. in particular, for a gaussian kernel, they found that the spreading rate is . [thm:unbounded] let for all . let such that for all . assume that there exists such that . let be the solution of the cauchy problem. then for all , one has . in the case of a cauchy distribution, we obtain a faster spreading rate under similar assumptions (see proposition [supersolunbounded-cauchy]), namely for all , one has . the phenomenon of accelerating fronts has raised a lot of attention in the literature on reaction-diffusion equations. this phenomenon occurs for the fisher-kpp equation when the initial datum decays more slowly than any exponential; for a variant of the fisher-kpp equation where the diffusion operator is replaced by a nonlocal dispersal operator with fat tails; or by a nonlocal fractional diffusion operator. recently, accelerating fronts have been conjectured to occur in a reaction-diffusion-mutation model which extends the fisher-kpp equation to a population with a heterogeneous diffusion coefficient. there is some subtlety hidden behind this phenomenon of infinite speed of spreading. in fact the diffusion limit of the scattering equation (namely ) towards the heat equation makes no difference between bounded and unbounded velocity sets (see and the references therein). however very low densities behave quite differently, which can be measured in the setting of large deviations or in the wkb limit. this can be observed even in the case of a bounded velocity set. in the large deviation (wkb) limit of the scattering equation is performed. it differs greatly from the classical eikonal equation obtained from the heat equation. the case of unbounded velocities is even more complicated.
to conclude, let us emphasize that low densities are the ones that drive the front here (pulled front). so the diffusion limit is irrelevant in the case of unbounded velocities, since a very low density of particles with very large speeds makes a big difference. we first recall some useful results concerning the cauchy problem associated with : well-posedness and a strong maximum principle. these statements extend some results given in . they do not rely on the boundedness of . [global existence: theorem 4 in ] [cauchypbm] let be a measurable function such that for all . then the cauchy problem has a unique solution in the sense of distributions, satisfying . the next result refines the comparison principle of in order to extend it to sub- and supersolutions in the sense of distributions and to state a strong maximum principle. its proof is given in the appendix. [prop-mp] assume that are respectively a super- and a subsolution of , _ i.e. _ in the sense of distributions. assume in addition that satisfies for all . then for all . assume in addition that is an interval, and that . if there exists such that , then one has for all such that . if , then this statement reads as in the parabolic framework: if and at , then for all . in the case , fatou's lemma yields . hence, there exists a solution such that .
assuming that and working on a bounded domain , since is bounded. therefore, solving the dispersion relation for the minimal speed boils down to solving ; we deduce therefore that the minimal speed verifies . in this subsection, we focus on the linearized problem around some travelling wave moving at speed . we recall that we consider a solution of the equation associated with the linearization around a travelling wave: , where the notation always stands in the sequel for a function of the variable . we shall prove stability of the wave in a suitable framework, inspired by . we search for an ansatz . the function satisfies: . from , we shall derive the dissipation inequality stated in proposition . we test against to obtain the kinetic energy estimate: . we aim at choosing a weight such that the dissipation is coercive in norm. let us define the symmetric kernel as follows . we seek a function such that , for a suitable positive function , in the sense of kernel operators. for this purpose we focus on the eigenvalues of the kernel operator. [eq:def weight phi] we introduce the smallest solution of the following dispersion relation , and we define through the differential equation ; finally we define . recall that , so that the weight is well-defined as soon as is well-defined. a short argument is required to prove that is well-defined too. for a given and , define . the function is continuous, and satisfies the following properties: g(\lambda) = \int_{v} \frac{(1+r)\,m(v) - r f(z,v)}{1 + \lambda(c - v)}\, dv = 1 - \int_{v} \frac{r f(z,v)}{1 + \lambda(c - v)}\, dv \leq 1\,, where is chosen such that . thus we can define the smallest . we renormalized the distribution; for any we observed the asymptotic regime of a travelling front with finite speed, as expected.
however, as , we observed no stabilization of the asymptotic spreading speed. in fact, we observed that the envelope of the spreading speed scales approximately as . hence the front is accelerating with the approximate power law . we expect that the scaling strongly depends on the decay of as . (see fig. [fig:vnonborne].) we assume in this section that and that for all . we prove superlinear spreading. we deduce as a corollary that there cannot exist a travelling wave solution of . we also give some quantitative features about the spreading of the density in two cases: the case where is a gaussian, and the case where it is a cauchy's law. we construct explicit supersolutions from which we estimate the spreading from above. we expect those estimates to be sharp; in fact they are in accordance with numerical simulations. as we are not able to construct suitable subsolutions, we leave it as an open problem to prove the exact spreading rate, at least for these two specific cases (gaussian and cauchy's law). before going to the proof, let us give some heuristics concerning the superlinear spreading rate. reaction-diffusion fronts with kpp nonlinearity are _ pulled fronts _: the spreading rate is determined by the dynamics of small populations at the far edge of the front.
in the kinetic model with unbounded velocities, individuals with arbitrarily large speeds go to the far edge of the front. no matter how low their density, they yield exponential growth of the population and pull the accelerating front. of course we expect the acceleration to depend on the specific tail of the distribution . actually we conjecture that the spreading rates for a gaussian and a cauchy's law scale respectively as , and as . last, let us emphasize that the diffusive limit of leads to the classical fisher-kpp equation, under the assumption that has some finite moments, which is obviously the case for a gaussian. the fisher-kpp equation exhibits linear spreading whereas the kinetic equation may exhibit superlinear spreading. the point is that small populations do not show the same scaling at the far edge of the front. to summarize, we shall say that the asymptotics of large deviations are very much different in the two cases: reaction-diffusion as opposed to kinetic transport-reaction. let so that . for all we define the renormalized truncated kernel and the associated growth rate, m_a(v) = \frac{\mathbf{1}_{[-a,a]}(v)}{\int_{-a}^a m}\, m(v) \quad \hbox{and} \quad r_a = (1+r)\int_{-a}^a m(v)\,dv - 1 \in (0,r)\,. as is compactly supported, we can apply the results proved when is bounded in order to construct appropriate subsolutions. before we proceed with subsolutions we investigate the dispersion relation in the limit . define for all and : , and the corresponding minimal speed defined in lemma [minlambda]. [infspeed] one has . for all , let such that . if does not diverge to as , then it is bounded along a sequence , and one has simply by comparison. applying fatou's lemma to , one gets a contradiction. let be the solution of the truncated problem with initial condition g_a(0,x,v) = g^0(x,v) in \mathbb{r} \times [-a,a], and .
clearly, for all . hence, multiplying by , we get . extending by outside of , one gets for all . in fact one has \frac{1}{\sqrt{2\pi}\,\sigma\,\left[(s+a)^2+(t-s)^2\right]^{\frac12}} \exp\left(-\frac{1}{2\sigma^2}\frac{x^2}{(s+a)^2+(t-s)^2}\right) \leq \frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{1}{2\sigma^2}\frac{x^2}{(t+a)^2}\right), since, for , (t+a)^2 \geq (s+a)^2 + (t-s)^2 \geq (s+a)^2. this yields . to estimate , we plug in the formula for ; we compute the last integral using the formula for the initial condition. thus, for all : \exp\left(-(1+r)t\right) \leq \exp\left(-(1+r)t\right), as long as and , that is . this concludes the proof. let . for all , we define the zone . from the definition of , we deduce that is a supersolution such that . however, for all and , we have ; it yields that . computations are made easier above since the gaussian distribution is stable under convolution. this is also the case for the cauchy distribution; therefore we are able to derive an inequality similar to . let us comment on that specific case before we give the spreading estimate. because the distribution has an infinite variance, we learn from that the correct macroscopic limit leads to a nonlocal fractional laplacian operator. on the other hand, we expect from an exponentially fast propagation in the diffusion regime. similarly to our previous results, we can reasonably expect that the spreading rate is faster in the kinetic model than in the macroscopic limit. therefore we expect a spreading rate faster than exponential. in fact the supersolution that we are able to derive confirms this expectation. [supersolunbounded-cauchy] let and .
for and $ ] , define let be defined by then is a supersolution of , that is : the proof is the same as for proposition [ supersolunbounded ] .we just show the main computations in the case of the cauchy distribution .to prove we use the residue method as follows , the analog computation for proving goes as follows .first we have thanks to the expression of the initial condition , we compute the latest integral : thus , which holds true if and .we give in this section the proofs of propositions [ cauchypbm ] and [ prop - mp ] .well - posedness relies on a fixed point argument which is also used for the comparison principle .we first state two lemmas .[ lem : eulin ] let and .then there exists a unique function such that in the sense of distributions .this solution also satisfy the duhamel formula : moreover , if and , then in . for define the operator where take and define and .assume that over .for all , one has : hence , there exists such that for all , is a contraction over . if on , then such an estimate can be derived similarly .hence , admits a unique fixed point , which satisfies over .this gives the local existence and uniqueness of the solution of .moreover , as does not depend on the initial datum , the global existence follows . [ lem : posa_t ] assume that is everywhere positive and that is an interval . then if is nonnegative and if there exists such that , letting the unique solution of , one has for all such that .first , assume by contradiction that there exists such that , with .then integrating over , one gets hence , for all and . letting , one gets for all . as and is an interval, one can take such that , leading to .this is a contradiction since , as is continuous , nonegative and , one has .hence for all such that .define . 
as in the proof of lemma 6 in , we first remark that this function satisfies with for all . we define and . writing the integral formulation as in the proof of lemma [lem:eulin] gives , and thus in for some operator which is monotone and contractive when is small enough. it follows that for all . since is contractive, the sequence converges to . we conclude that , meaning that . next, assume that , is an interval, and that there exists such that . we can follow the proof of lemma [lem:posa_t], where defined above is positive everywhere. we deduce that as soon as .
aronson, h.f. weinberger, _ nonlinear diffusion in population genetics, combustion, and nerve pulse propagation _. partial differential equations and related topics. lecture notes in math. 446, springer, berlin, 1975.
e. bouin, v. calvez, n. meunier, s. mirrahimi, b. perthame, g. raoul, and r. voituriez. _ invasion fronts with variable motility: phenotype selection, spatial sorting and wave acceleration _. c. r. math. paris, 350(15-16):761-766, 2012.
dunbar, h.g. othmer, _ on a nonlinear hyperbolic equation describing transmission lines, cell movement, and branching random walks _. nonlinear oscillations in biology and chemistry. lecture notes in biomath. 66, springer, berlin, 1986.
kolmogorov, i.g. petrovsky, n.s. piskunov, _ étude de l'équation de la diffusion avec croissance de la quantité de matière et son application à un problème biologique _, moscow univ. math. bull. * 1 * (1937), 1-25.
| in this paper, we study the existence and stability of travelling wave solutions of a kinetic reaction-transport equation. the model describes particles moving according to a velocity-jump process, and proliferating thanks to a reaction term of monostable type. the boundedness of the velocity set appears to be a necessary and sufficient condition for the existence of positive travelling waves.
the minimal speed of propagation of waves is obtained from an explicit dispersion relation. we construct the waves using a technique of sub- and supersolutions and prove their stability in a weighted space. in the case of an unbounded velocity set, we prove superlinear spreading and give partial results concerning the rate of spreading associated with particular initial data. it appears that the rate of spreading depends strongly on the decay at infinity of the stationary maxwellian. * key-words: * kinetic equations, travelling waves, dispersion relation, superlinear spreading. |
the main feature of a lévy-type density distribution is the slow, power-law decay of its tail. more precisely, for large , , with . note that the second moment of diverges for all , and if the first moment also diverges. distributions of this kind are also known as -stable distributions. random processes characterized by probability densities with a long tail (lévy-type processes) have been found and studied in very different phenomena and fields such as biology, economics, and physics. among the many recently studied systems showing lévy-type processes we can mention: animal foraging, human mobility, earthquake statistics, mosquito flight in epidemiological modeling, and light transmission through a disordered glass. see also for a compilation of systems displaying lévy flights. in particular, to help us introduce later the main model system of this study, i.e. the _ lévy map _, we want to describe in some detail a simple dynamical model characterized by lévy processes: the _ ripple billiard _. the ripple billiard, see for example chapter 6 of , consists of two walls: one flat at and a rippled one given by the function ; here is the average width of the billiard and the ripple amplitude, see fig. [fig1].
an attractive feature of the ripple billiard is that its classical phase space undergoes the generic transition to global chaos as the amplitude of the cosine function increases. then, results from the analysis of this system are applicable to a large class of systems, namely non-degenerate, non-integrable hamiltonians. moreover, the dynamics of classical particles inside the ripple billiard can be well approximated by a two-dimensional (2d) poincaré map between successive collisions with the rippled boundary, where is the angle the particle's trajectory makes with the -axis just before the bounce with the rippled boundary at . map can be easily derived and, after the assumptions and , it takes the simple form . as an example, in fig. [fig2](a) we plot the poincaré map for and . it is clear from this plot that this combination of geometrical parameters produces ergodic dynamics (also known as global chaos). notice that in this figure we have plotted the variable modulus , as usual for this kind of 2d map; with this, we can globally visualize the map dynamics in a single plot, but we may lose important information. (figure: see eq. ([m]).) among the dynamical information which is lost when applying to a map such as , we can mention the length of the paths between successive map iterations, i.e. the length between two successive collisions with the rippled boundary of the billiard. in fact, in fig. [fig2](c) we present for the same parameters used to construct fig. [fig2](a). from this figure we can clearly see that: (i) even though most of the paths produced by map are short (i.e.
is highly peaked at ), there is a non-negligible probability for very large values of to occur: notice that the values of at the edges of fig. [fig2](c) mean that a particle has traveled about 160 periods of the rippled billiard between two successive collisions with the rippled boundary; and (ii) decays as a power law similar to eq. ([levy]). these two facts are explicit evidence of lévy flights in the dynamics of map . thus, the following question becomes pertinent: can we provide an analytic expression for the shape of given the simple form of map ? fortunately, the answer is positive, as we will show below. if we consider the dynamics of map to be in the regime of full chaos, then a single trajectory can explore the full available phase space homogeneously, as shown in fig. [fig2](a), so is constant and equal to , as verified in fig. [fig2](b). also, from the second equation in map we obtain . thus, using , we can write , which is in fact a lévy-type probability distribution function with ; compare with eq. ([levy]). then, in fig. [fig2](c) we plot eq. ([levym]) (as the red full line) together with the numerically obtained and observe a very good correspondence, making clear the existence of lévy processes, characterized by the power-law decay, in the dynamics of the rippled billiard. in fact, the origin of the lévy-type probability distribution of eq. ([levym]) for the lengths in the ripple billiard is the existence of lévy flights. since a typical chaotic trajectory fills the available phase space uniformly, see fig. [fig2](a), all angles are equally probable; however, different angles produce quite different lengths. for example, an angle very close to corresponds to a very short length, see fig. [fig1], while angles tending to zero or produce trajectories which are nearly parallel to the -axis and may travel very long distances between successive collisions with the rippled boundary: in such a case .
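the mechanism just described — a uniform angle distribution pushed through a length that blows up for grazing angles — can be reproduced by direct sampling. the sketch below is a caricature, not the billiard itself: we assume the grazing-length form l = 1/|tan(theta)| with theta uniform on (0, pi), dropping the geometric prefactors; this has the same divergence for grazing angles and yields the survival function P(l > L) ~ 1/L, i.e. a density tail l^{-2} as in eq. ([levym]).

```python
import math
import random

rng = random.Random(1)
N = 400000
# uniform bounce angle (ergodicity makes P(theta) constant) pushed through
# the grazing-length caricature l = 1/|tan(theta)|
lengths = [1.0 / abs(math.tan(rng.uniform(0.0, math.pi))) for _ in range(N)]

def survival(L):
    """Empirical P(l > L)."""
    return sum(1 for l in lengths if l > L) / N

# a Levy-type tail P(l) ~ l^{-2} means P(l > L) ~ 1/L: multiplying the
# threshold by 10 should cut the number of surviving flights by ~10
ratio = survival(10.0) / survival(100.0)
print(ratio)   # close to 10
```

the exact value is arctan(0.1)/arctan(0.01), which is about 9.97, so the empirical ratio hovering near 10 is the power-law signature.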
these _ grazing _ trajectories are indeed lévy flights, known to produce heavy-tailed distribution functions; for the ripple billiard the lévy flights produce eq. ([levym]), as derived above. moreover, grazing trajectories in the ripple billiard have been found to play a prominent role when defining the classical analogs of the quantum _ structure of eigenstates _ and _ local density of states _. equation ([levym]) is already an interesting result on the dynamics of the rippled billiard (and of general chaotic extended billiards with infinite horizon) that deserves additional attention; however, our goal here is different. now that we know that map produces lévy flights characterized by , we ask ourselves: can we propose a general 2d non-linear map where can be included as a parameter? more generally, can we construct the map able to produce lévy flights characterized by ? indeed, in the following section we elaborate on these questions. we introduce the _ lévy map _ by following the opposite procedure to the one we used above to obtain the distribution function of eq. ([levym]) from map . let us
* consider the 2d map to have the same iteration relation for the angle as map , see eq. ([m]),
* assume the map to be in a regime of global chaos, such that , and
* demand the variable from map to be characterized by the lévy-type density distribution function , where and is a normalization constant.
then, provides . therefore, we define the _ lévy map _ as , where , , and are the map parameters. we have introduced the absolute value in the second equation of to avoid fractional powers of negative angles. this, in turn, makes all _ lengths _ positive. notice that for and , where , we recover map from (with ). we also note that has a similar form to the maps studied in refs. , in the sense that the function in the second line of the map is inversely proportional to the angle raised to a non-integer power.
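the recipe above — keep the angle uniformly distributed and demand a length density P(l) ~ l^{-(1+alpha)} — is precisely inverse-transform sampling. below is a minimal sketch with the map's constants (gamma, 2*pi*m) dropped, since they only rescale the lengths; the tail exponent alpha is all that matters here.

```python
import random

def levy_lengths(alpha, n, seed=2):
    """Heavy-tailed 'flight lengths' with density P(l) ~ l^{-(1+alpha)}:
    if theta is uniform on (0, 1], then l = theta**(-1/alpha) has the
    survival function P(l > L) = L**(-alpha) -- the inverse-transform
    reading of the map's second line, with all constants dropped."""
    rng = random.Random(seed)
    return [(1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]

alpha = 0.5
ls = levy_lengths(alpha, 200000)

def surv(L):
    return sum(1 for l in ls if l > L) / len(ls)

# P(l > 10) / P(l > 1000) should be (1000/10)**alpha = 10 for alpha = 1/2
ratio = surv(10.0) / surv(1000.0)
print(ratio)
```

changing `alpha` tunes the tail exponent exactly as the parameter does in the map.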
below we will focus our attention on map with the parameter in the interval , because our motivation is to construct a map able to produce pseudo-random variables distributed according to -stable distributions. however, the parameter may also take values outside this interval. in general, depending on the values of the parameters, the dynamics of the lévy map may be integrable, mixed (where the phase space contains periodic islands surrounded by chaotic seas which may be limited by invariant spanning curves), or ergodic. that is, the classical phase space of map develops the generic transition to global chaos (not shown here). however, for eq. ([pofteta]) to be valid the map dynamics must be ergodic. therefore, below, by applying chirikov's overlapping resonance criterion we shall identify the onset of global chaos as a function of the parameters of the lévy map. following , we linearize around the period-one fixed points, which are defined through . this condition provides . then, for an angle close to we can write , getting . thus, in addition, for we have x_{n+1} = x_n + 2\pi m\left[1 - (\gamma/2\pi m)^{-\alpha}\,\delta\theta_{n+1}\right] = x_n - \gamma^{-\alpha}(2\pi m)^{\alpha+1}\,\delta\theta_{n+1} \ (\mathrm{mod}\ 2\pi m)\,.
\label{xn1}\end{aligned}\ ] ] finally , by substituting the new angle in ( [ deltateta ] ) and ( [ xn1 ] ) we can write the linearized map where and , respectively , play the role of action - angle variables in the _ standard map _ with being the stochasticity parameter . chirikov s overlapping resonance criterion predicts the transition to global chaos for , where . global chaos means that chaotic regions are interconnected over the whole phase space ( stability islands may still exist but are sufficiently small that the chaotic sea extends throughout the vast majority of phase space ) . this criterion for the lévy map reads as in fact , to get eq . ( [ caosg ] ) from eq . ( [ k ] ) we have applied the resonance criterion to the period - one fixed point corresponding to , see eq . ( [ fp ] ) , which is the fixed point having the largest ( i.e. it is located highest in phase space ) and the one closest to the last invariant spanning curve bounding the diffusion of the variable . indeed , we have verified that the phase space of is ergodic if condition ( [ caosg ] ) is satisfied ( not shown here ) . moreover , in figs . [ fig4](a - d ) we plot the phase distribution functions for the lévy map with corresponding to , 1/2 , 1 , and 3/2 . from these figures , it is clear that is certainly a constant distribution . in particular , note that with condition ( [ caosg ] ) is satisfied for any , so we shall use this set of parameter values in all figures below . and ( e - h ) length distributions for the lévy map with . ( a , e ) , ( b , f ) , ( c , g ) , and ( d , h ) . to construct each histogram a single initial condition was iterated times . the red full curve in ( e - h ) is .,width=302 ] now we would like to verify that once , must produce lengths distributed according to eq . ( [ levy2 ] ) . however , we noticed that for the lévy map produces huge values of .
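the heavy - tailed length statistics described above can be sketched numerically . the map equations themselves are stripped from this transcription , so the snippet below only assumes ( i ) the ergodicity condition , under which the angle equidistributes on ( 0 , 2π ] , and ( ii ) an inverse - power dependence of the length on the angle , ℓ = γ θ^(-α) , consistent with the description of the map s second line ; this specific form and all names are illustrative , not the published map .

```python
import numpy as np

def levy_lengths(alpha, gamma, n, rng):
    # Ergodicity assumption: under global chaos the angle equidistributes
    # on (0, 2*pi].  Drawing theta uniformly and taking the assumed
    # inverse-power dependence l = gamma * theta**(-alpha) gives a
    # density with a heavy tail, P(l) ~ l**(-(1 + 1/alpha)) for large l.
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    return gamma * theta ** (-alpha)

def hill_tail_index(x, k):
    # Hill estimator of mu in P(l > t) ~ t**(-mu), from the k largest values.
    xs = np.sort(x)[-k:]
    return 1.0 / np.mean(np.log(xs / xs[0]))
```

for this assumed form the tail exponent is 1 + 1/α , which a hill estimate on the generated sequence reproduces .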
to show this , in fig . [ fig5 ] we plot the typical value of , as a function of ; where we can observe that for the typical is larger than ( in fact , from the data we used to construct the of fig . [ fig4](a ) we obtained several lengths of the order of ! ) . thus , it is not practical to construct to test the validity of eq . ( [ levy2 ] ) itself . instead , we make the change of variable which leads to then in figs . [ fig4](e - h ) we show length distribution functions for the lévy map with , 1/2 , 1 , and 3/2 ( histograms ) . as clearly seen , the agreement between the histograms and ( shown as red thick lines ) is indeed excellent . it is relevant to stress that since the phase space of the lévy map is ergodic when condition ( [ caosg ] ) is satisfied , the sequences can then be considered as lévy - distributed pseudo - random numbers . in fact , in the next section we will show through a specific application that the lengths can be used in practice as random numbers . , , for the lévy map as a function of .
were used . the average was performed over values of obtained by iterating from the single initial condition .,width=264 ] there is a good deal of work devoted to the use of non - linear maps as pseudo - random number generators , see some examples in refs . therefore , in a similar way , we would like to use the lévy map to generate pseudo - random numbers particularly distributed according to the lévy - type probability distribution function of eq . ( [ levy2 ] ) . however , instead of analyzing the randomness of the sequences produced by , here we will show that these numbers can already be successfully used in a specific application : we shall compute transmission through one - dimensional ( 1d ) disordered wires . recently , the electron transport through 1d quantum wires with lévy - type disorder was studied in refs . there , it was found that the average ( over different disorder realizations ) of the logarithm of the dimensionless conductance behaves as where is a _ length _ that depends on the wire model . for example , for a wire represented as a sequence of potential barriers with random lengths , ; where is the length of the barrier in the wave propagation direction . while for a wire represented by the 1d anderson model with off - diagonal disorder , ; where is the hopping integral between the sites and . here , we use the 1d anderson model with off - diagonal disorder to represent 1d quantum wires ( see details in the appendix ) where the hopping integrals are , in fact , the pseudo - random lengths generated by our lévy map . then , in fig . [ fig6 ] we plot as a function of for the 1d anderson model with lévy - type disorder characterized by , 1/2 , 1 , and 3/2 . we have computed the dimensionless conductance by the use of the effective hamiltonian approach ( see details in the appendix ) . in fig . [ fig6 ] we are using to normalize to be able to show curves corresponding to different values of in the same figure panel .
also , in fig . [ fig6 ] we are including fittings of the curves vs. with eq . ( [ g ] ) , see red dashed lines , which certainly show the _ anomalous _ conductance behavior predicted in refs . these fits therefore validate the use of the lévy map as a pseudo - random number generator . as a function of for the 1d anderson model with off - diagonal lévy - type disorder characterized by . we used an incoming wave with energy . the dashed lines are fittings of the data with eq . ( [ g ] ) . each symbol was calculated using wire realizations . each wire realization is constructed from a single sequence of lengths generated by map having random initial conditions uniformly distributed in the intervals and .,width=226 ] in this paper we have introduced the so - called _ lévy map _ : a two - dimensional nonlinear map characterized by tunable lévy flights . indeed it is described by a 2d nonlinear and area - preserving map with a control parameter driving two important transitions : ( i ) integrability ( ) to non - integrability ( ) ; and ( ii ) local chaos with to globally chaotic dynamics with . we have applied chirikov s overlapping resonance criteria to identify the onset of global chaos as a function of the parameters of the map , therefore reaching condition ( ii ) as described above . in this way we stated the requirements under which the lévy map could be used as a lévy pseudo - random number generator and confirmed its effectiveness by computing scattering properties of disordered wires . in sect . [ conductance ] we have considered 1d tight - binding chains of size described by the hamiltonian where are on - site potentials that we set to zero and are hopping amplitudes connecting nearest sites .
here , . we open the 1d chains by attaching two single - mode semi - infinite leads to the opposite sites on the 1d samples . each lead is described by the 1d semi - infinite tight - binding hamiltonian then , following the _ effective hamiltonian approach _ , the scattering matrix ( -matrix ) has the form where , , , and are transmission and reflection amplitudes ; is the unit matrix , is the wave vector supported in the leads , and is an effective energy - dependent non - hermitian hamiltonian given by above , is a matrix with elements ^{1/2}(\delta_{1,1}+\delta_{l,2})$ ] . * acknowledgments . * j.a.m .- b is grateful to fapesp ( 2013/14655 - 9 ) brazilian agency ; partial support from viep - buap grant mebj - exc14-i and fondo institucional pifca 2013 ( buap - ca-169 ) is also acknowledged . b also thanks the warm hospitality at departamento de física at unesp rio claro , where this work was mostly developed . j.a.o thanks prope / fundunesp / unesp . thanks to fapesp ( 2012/23688 - 5 ) , cnpq , and capes , brazilian agencies . g. a. luna - acosta , k. na , l. e. reichl , and a. krokhin , phys . rev . e * 53 * , 3271 ( 1996 ) ; a. j. martínez - mendoza , j. a. méndez - bermúdez , g. a. luna - acosta , and n. atenco - analco , rev . mex . fis . s * 58 * , 6 ( 2012 ) . c. mahaux and h. a. weidenmüller , _ shell model approach in nuclear reactions _ , ( north - holland , amsterdam , 1969 ) ; j. j. m. verbaarschot , h. a. weidenmüller , and m. r. zirnbauer , phys . rep . * 129 * , 367 ( 1985 ) ; i. rotter , rep . prog . phys . * 54 * , 635 ( 1991 ) . | once recognizing that point particles moving inside the extended version of the rippled billiard perform lévy flights characterized by a lévy - type distribution with , we derive a generalized two - dimensional non - linear map able to produce lévy flights described by with . due to this property , we name as the _ lévy map_.
then , by applying chirikov s overlapping resonance criteria we are able to identify the onset of global chaos as a function of the parameters of the map . with this , we state the conditions under which the lévy map could be used as a lévy pseudo - random number generator and , furthermore , confirm its applicability by computing scattering properties of disordered wires . |
we investigate the shrinkage properties of the partial least squares ( pls ) regression estimator . it is known ( e.g. ) that we can express the pls estimator obtained after steps in the following way : where is the component of the ordinary least squares ( ols ) estimator along the principal component of the covariance matrix and is the corresponding eigenvalue . the quantities are called shrinkage factors . we show that these factors are determined by a tridiagonal matrix ( which depends on the input - output matrix ) and can be calculated in a recursive way . combining the results of and , we give a simpler and clearer proof of the shape of the shrinkage factors of pls and derive some of their properties . in particular , we show that some of the values are greater than ( this was first proved in ) . + we argue that these `` peculiar shrinkage properties '' do not necessarily imply that the mean squared error ( mse ) of the pls estimator is worse than the mse of the ols estimator : in the case of deterministic shrinkage factors , i.e. factors that do not depend on the output , any value is of course undesirable . but in the case of pls , the shrinkage factors are stochastic ; they also depend on . even if . theorem [ ci2 ] holds independently of assumption ( [ ass ] ) . by definition , for here , so an application of theorem 10.4.1 in gives the desired result . [ distinct ] if is unreduced , the eigenvalues of and the eigenvalues of are distinct . suppose the two matrices have a common eigenvalue . it follows from ( [ charpol ] ) and the fact that is unreduced that is an eigenvalue of . repeating this , we deduce that is an eigenvalue of , a contradiction , as [ not1 ] in general it is not true that and a submatrix have distinct eigenvalues . consider the case where for all . using equation ( [ charpol ] ) we conclude that is an eigenvalue for all submatrices with odd . if , we have . is positive semidefinite , hence all eigenvalues of are .
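the interlacing statement of theorem [ ci2 ] and the distinct - eigenvalue property of unreduced tridiagonal matrices can be illustrated numerically ; a small sketch with arbitrary ( hypothetical ) entries :

```python
import numpy as np

rng = np.random.default_rng(0)
p = 6
diag = rng.normal(size=p)
off = rng.uniform(0.5, 1.5, size=p - 1)   # all nonzero -> the matrix is unreduced
T = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

eig_full = np.linalg.eigvalsh(T)            # ascending eigenvalues of T
eig_sub = np.linalg.eigvalsh(T[:-1, :-1])   # leading principal submatrix
```

for an unreduced symmetric tridiagonal matrix the eigenvalues of the leading principal submatrix strictly interlace those of the full matrix , and the full matrix has distinct eigenvalues ; both can be checked on `eig_full` and `eig_sub` .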
in other words , if and only if its smallest eigenvalue is . using theorem [ ci2 ] we have as , the matrix is unreduced , which implies that and have no common eigenvalues ( see [ distinct ] ) .we can therefore replace the first by , i.e. the smallest eigenvalue of is . in general, it is not true that .an easy example is we have i.e. .on the other hand it is well known that the matrices are closely related to the so - called rayleigh - ritz procedure , a method that is used to approximate eigenvalues .for details consult e.g. .we have presented two estimators for the regression parameter ols and pls which also define estimators for via one possibility to evaluate the quality of an estimator is to determine its mean squared error ( mse ) . in general , the mse of an estimator for a vector - valued parameter is defined as \\ & = & e\left [ \left(\hat\theta -\theta\right)^t\left(\hat \theta -\theta\right ) \right]\\ & = & \left(e\left [ \hat \theta \right ] -\theta \right)^t\left(e\left [ \hat \theta \right ] -\theta \right)+ e\left [ \left(\hat \theta^t - e\left[\hat\theta\right]\right)^t \left(\hat \theta^t - e\left[\hat\theta\right]\right ) \right]\,.\end{aligned}\ ] ] this is the well - known bias - variance decomposition of the mse . the first part is the squared bias and the second part is the variance term .we start by investigating the class of linear estimators , i.e. 
estimators that are of the form for some matrix that does not depend on .the ols estimators are linear : is the projection onto the space that is spanned by the columns of .+ recall the regression model ( [ linreg ] ) .let be a linear estimator .we have &= & sx\beta \\ \text{var}\left[\hat \theta \right]&=&\sigma^2 \text{tr}\left(ss^t\right)\,.\end{aligned}\ ] ] the estimator is unbiased as & = & s_2 x\beta \\ & = & p_{l(x ) } x\beta\\ & = & x\beta\,.\end{aligned}\ ] ] the estimator is only unbiased if : &=&e\left[\left(x^tx\right)^- x^t y\right]\\ & = & \left(x^tx\right)^- x^t e\left[y\right]\\ & = & \left(x^tx\right)^- x^t x\beta \\ & = & \beta\,.\end{aligned}\ ] ] let us now have a closer look at the variance term .for we have hence next note that is the operator that projects on the space spanned by the columns of .it follows that and that we conclude that the mse of the estimator depends on the eigenvalues of .small eigenvalues of correspond to directions in that have very low variance .equation ( [ varbeta ] ) shows that if some eigenvalues are small , the variance of is very high , which leads to a high mse .+ one possibility to ( hopefully ) decrease the mse is to modify the ols estimator by shrinking the directions of the ols estimator that are responsible for a high variance .this of course introduces bias .we shrink the ols estimator in the hope that the increase in bias is small compared to the decrease in variance .+ in general , a shrinkage estimator for is of the form where is some real - valued function .the values are called shrinkage factors. examples are * principal component regression and * ridge regression where is the ridge parameter .we will see in section [ shrinkpls ] that pls is a shrinkage estimator as well .it will turn out that the shrinkage behavior of pls regression is rather complicated .+ let us investigate in which way the mse of the estimator is influenced by the shrinkage factors .if the shrinkage estimators are linear , i.e. 
the shrinkage factors do not depend on , this is an easy task . let us first write the shrinkage estimator in matrix notation . we have the diagonal matrix has entries . the shrinkage estimator for is we calculate the variance of these estimators . and next , we calculate the bias of the two shrinkage estimators . we have &= & s_{shr,1 } x\beta\\ & = & u\sigma d_{shr } \sigma^-u^t \beta \,.\end{aligned}\ ] ] it follows that -\beta \right)^t\left(e\left[s_{shr,1 } y\right ] -\beta \right)\\ & = & \left(u^t \beta\right)^t \left(\sigma d_f \sigma^- -\text{id}\right)^t \left(\sigma d_f \sigma^--\text{id}\right)\left(u^t \beta\right)\\ & = & \sum_{i=1 } ^{p^ * } \left(f(\lambda_i ) -1\right)^2 \left(u_i ^t \beta \right)^2\,.\end{aligned}\ ] ] replacing by it is just as easy to show that for the shrinkage estimator and defined above we have if the shrinkage factors are deterministic , i.e. they do not depend on , any value increases the bias . values decrease the variance , whereas values increase the variance . hence an absolute value is always undesirable . the situation is completely different for stochastic shrinkage factors . we will discuss this in the following section . + note that there is a different notion of shrinkage , namely that the - norm of an estimator is smaller than the -norm of the ols estimator . why is this a desirable property ? let us again consider the case of linear estimators . set for . we have the property that for all is equivalent to the condition that is negative semidefinite . the trace of negative semidefinite matrices is . furthermore , so we conclude that it is known ( see ) that . in this section , we give a simpler and clearer proof of the shape of the shrinkage factors of pls .
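the deterministic shrinkage picture above can be checked directly for ridge regression , whose shrinkage factor along the i - th eigendirection of the covariance matrix is the standard f ( λ ) = λ / ( λ + κ ) ; the design matrix and parameter values below are illustrative :

```python
import numpy as np

rng = np.random.default_rng(42)
n, p = 50, 4
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.1 * rng.normal(size=n)

A = X.T @ X
lam, U = np.linalg.eigh(A)                 # eigenvalues/eigenvectors of X^T X
b_ols = np.linalg.solve(A, X.T @ y)

# Deterministic shrinkage: multiply the OLS component along the i-th
# eigendirection by f(lambda_i); for ridge regression with parameter
# kappa the factor is f(lambda) = lambda / (lambda + kappa), always in (0, 1).
kappa = 3.0
f = lam / (lam + kappa)
b_shrunk = U @ (f * (U.T @ b_ols))

# The same estimator obtained directly from the ridge normal equations.
b_ridge = np.linalg.solve(A + kappa * np.eye(p), X.T @ y)
```

the two constructions agree , which makes concrete the statement that ridge is the ols estimator with each eigencomponent deterministically shrunk .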
basically , we combine the results of and .it turns out that some of the factors are greater than 1 .we try to explain why these `` peculiar shrinkage properties '' do not necessarily imply that the mse of the pls estimator is increased .+ denote by the polynomial associated to that was defined in proposition [ penrose ] , i.e. recall that the eigenvalues of are denoted by .it follows that by definition of pls , hence there is a polynomial of degree with .suppose that .we have by proposition [ penrose ] , we plug this into equation ( [ bhat ] ) and obtain recall that the columns of form an orthonormal basis of .it follows that is the operator that projects on the space .in particular for .this implies that suppose that .if we denote by the component of along the eigenvector of then where is the polynomial defined in ( [ tm ] ) .( ) this follows immediately from the proposition above .we have we now show that some of the shrinkage factors of pls are . for each , we can decompose the interval ] for . by definition ,hence is non - negative on the intervals if is odd and is non - positive on the intervals if is even .it follows from theorem [ ci2 ] that all interval contain at least one eigenvalue of .in general it is not true that for all and . using the example in remark [ not1 ] and the fact that is equivalent to the condition that is an eigenvalue of , it is easy to construct a counterexample . using some of the results of section [ sectiontri ], we can however deduce that some factors are indeed .as all eigenvalues of and are distinct ( c.f . proposition [ distinct ] ) , we see that for all . in particular more generally , using proposition [ distinct ] , we conclude that and is not possible . in practice i.e. 
calculated on a data set the factors seem to be all of the time . + furthermore , to prove this , we set . we have by definition . furthermore , the smallest positive zero of is and it follows from theorem [ ci2 ] and proposition [ distinct ] that . hence ,1]$ ] . + using theorem [ ci2 ] , more precisely it is possible to bound the terms from this we can derive bounds on the shrinkage factors . we will not pursue this further ; readers who are interested in the bounds should consult . instead , we take a closer look at the mse of the pls estimator . + in section [ shrinkage ] we showed that a value is not desirable , as the variance of the estimator increases . note , however , that in the case of pls , the factors are stochastic ; they depend on - in a nonlinear way . for we have the following situation : if we set and , we have to compare note that the rhs is not necessarily smaller than the lhs , even if . an easy counterexample is the lhs is . + among others , proposed to bound the shrinkage factors of the pls estimator in the following way . set and define a new estimator : if the shrinkage factors are numbers , this will improve the mse ( cf . section [ shrinkage ] ) . but in the case of stochastic shrinkage factors , the situation is completely unclear . consider again the example . set in this case so it is not clear whether the modified estimator bound leads to a lower mse , which was conjectured in e.g. . + the above example ( involving and ) is of course purely artificial . it is not clear whether the shrinkage factors behave this way . it is hard , if not infeasible , to derive statistical properties of the pls estimator or its shrinkage factors , as they depend on in a complicated , nonlinear way . as an alternative , we compare the two different estimators on different data . in this section , we explore the difference between the methods pls and bound . we investigate three artificial datasets and one real world example .
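before turning to the data comparisons , the stochastic shrinkage factors themselves can be computed . the sketch below uses the krylov - space characterization of pls ( the m - step estimator is the least - squares fit of y on the basis b , ab , ... , a^(m-1) b with a = x^t x and b = x^t y ) ; this representation is standard but stated here as an assumption , and all names and data are illustrative :

```python
import numpy as np

def pls_krylov(X, y, m):
    # m-step PLS regression coefficients via the Krylov-space
    # characterization: least-squares fit of y on the basis
    # K_m = [b, A b, ..., A^(m-1) b] with A = X^T X, b = X^T y.
    A, b = X.T @ X, X.T @ y
    K = np.column_stack([np.linalg.matrix_power(A, j) @ b for j in range(m)])
    c, *_ = np.linalg.lstsq(X @ K, y, rcond=None)
    return K @ c

rng = np.random.default_rng(7)
n, p = 40, 5
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + 0.5 * rng.normal(size=n)

lam, U = np.linalg.eigh(X.T @ X)
b_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Stochastic shrinkage factors: component of the m-step PLS estimator
# along each eigendirection of X^T X, relative to the OLS component.
# Values outside [0, 1] can occur for m < p.
factors = {m: (U.T @ pls_krylov(X, y, m)) / (U.T @ b_ols) for m in range(1, p + 1)}
```

the dictionary of factors can then be inspected for values outside [ 0 , 1 ] at m < p ; at m = p the factors are all 1 and pls coincides with ols .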
in all examples , we rescale and to have zero mean and unit variance . + let us start with the artificial datasets . of course , artificial datasets do not reflect many real world situations , but we have the advantage that we know the true regression coefficient and that we have an unlimited number of examples at hand . we can estimate the mse of any of the four estimators : for we generate a sample and calculate the estimator . we define for all examples , we choose . in our first example we generate examples in the following way : the input data is the realisation of a dimensional normally distributed variable with expectation and covariance matrix defined as the regression coefficient is the random permutation of with . + next we determine the variance of the error term . we do this by considering several signal - to - noise - ratios ( stnr ) . this quantity is defined as we set and determine the corresponding value of . we generate samples and calculate the four estimators . + the following figures show the estimated mse for and respectively . the solid lines with the s correspond to pls . the lines with the s correspond to bound . + we see that bound is better in all cases , although the improvement is not dramatic . we should remark that both methods pick the same ( optimal ) number of steps most of the time . the difference between the two methods is especially tiny ( but non - zero ) in the first step . we do not have an explanation for this phenomenon . the mse is the same for the last step as in this case in this example , we generate examples . the input data is the realisation of a dimensional random variable with distribution . the covariance matrix is defined as in the first example ( with replaced by ) . again , the coefficients of are a random permutation with . we consider the signal - to - noise - ratios . + the results are qualitatively the same as those from the first example . bound is better all of the time ; the optimal number of steps is the same for both
methods . the input data is generated as in the second example , in particular , we have . this time , we only generate examples . the coefficients of the regression vector are realizations of a distributed random variable . we investigate the signal - to - noise - ratios . as we have more variables than examples , we do not investigate estimators for : different vectors can lead to , so it does not make sense to determine the bias of an estimator for . instead , we only show the figures for and . again , the estimated mse of bound is lower than the estimated mse of pls . this example is taken from . a survey investigated the degree of job satisfaction of the employees of a company . the employees filled in a questionnaire that consisted of questions regarding their work environment and one question ( the response variable ) regarding the degree to which they are satisfied with their job . the answers of the employees were summarized for each of the departments of the company . + we compare the two methods pls and bound on this data set . for each we determine the 10 - fold cross - validation error . [ cv ] the method bound is slightly better than pls on this data set : the cv error for the optimal number of components ( which is ) is 0.2698 for bound and 0.2747 for pls . it is remarkable that in this example the cv error of bound exceeds the cv error of pls in some cases . it is not clear if this is due to the small number of examples ( which makes the estimation imprecise ) or if this can also happen `` in theory '' . this paper consists of two parts .
in the first part , we gave alternative and hopefully clearer proofs of the shrinkage factors of pls . in particular , we derived the fact that some of the shrinkage factors are . we explained in detail that this would lead to an unnecessarily high mse if pls were a linear estimator . this is , however , not the case , and we emphasized that bounding the absolute value of the shrinkage factors by does not automatically lead to a lower mse . + in the second part , we investigated the problem numerically . experiments on simulated and real world data showed that it might be better to adjust the shrinkage factors so that their absolute value is - a method that we called bound . the difference between bound and pls was not dramatic , however . besides , the scale of the experiments was of course way too small , so it would be rash to conclude that we should always use bound instead of pls . + nevertheless , the experiments show that it is worth exploring the method bound in more detail . one drawback of this method is that we have to adjust the shrinkage factors `` by hand '' . if bounding the shrinkage factors tends to lead to better results , we might modify the original optimization problem of pls such that the shrinkage factors of the solution are bounded . we might modify and to obtain a different krylov space or replace by a different set of feasible solutions . i would like to thank ulrich kockelkorn , who eliminated innumerable errors from earlier versions of this paper and who gave a lot of helpful remarks . i would also like to thank jörg betzin for our extensive discussions on pls . | we present a formula for the shrinkage factors of the partial least squares regression estimator and deduce some of their properties , in particular the known fact that some of the factors are . we investigate the effect of shrinkage factors on the mean squared error of linear estimators and illustrate that we can not extend the results to nonlinear estimators .
in particular , shrinkage factors do not automatically lead to a poorer mean squared error . we investigate empirically the effect of bounding the absolute value of the partial least squares shrinkage factors by . |
as of the durham conference , the problem of obtaining a goodness of fit in unbinned likelihood fits was an unsolved one . in what follows , we will denote by the vector , the theoretical parameters ( for `` signal '' ) and the vector , the experimentally measured quantities or `` configurations '' . for simplicity, we will illustrate the method where both and are one dimensional , though either or both can be multi - dimensional in practice .we thus define the theoretical model by the conditional probability density . then an unbinned maximum likelihood fit to data is obtained by maximizing the likelihood , where the likelihood is evaluated at the observed data points .such a fit will determine the maximum likelihood value of the theoretical parameters , but will not tell us how good the fit is .the value of the likelihood at the maximum likelihood point does not furnish a goodness of fit , since the likelihood is not invariant under change of variable .this can be seen by observing that one can transform the variable set to a variable set such that is uniformly distributed between 0 and 1 .such a transformation is known as a hypercube transformation , in multi - dimensions .other datasets will yield different values of likelihood in the variable space when the likelihood is computed with the original function .however , in the original hypercube space , the value of the likelihood is unity regardless of the dataset , thus the likelihood can not furnish a goodness of fit by itself , since neither the likelihood , nor ratios of likelihoods computed using the same distribution is invariant under variable transformations .the fundamental reason for this non - invariance is that only a single distribution , namely , is being used to compute the goodness of fit .in binned likelihood cases , where one is comparing a theoretical distribution with a binned histogram , there are two distributions involved , the theoretical distribution and the data distribution .the of the 
data is approximated by the bin contents of the histogram normalized to unity .if the data consists of events , the of the data is defined in the frequentist sense as the normalized density distribution in space of events as . in the binned case, we can bin in finer and finer bins as and obtain a smooth function , which we define as the of the data . in practice, one is always limited by statistics and the binned function will be an approximation to the true .we can now define a likelihood ratio such that where we have used the notation to denote the event set .let us now note that is invariant under the variable transformation , since and the jacobian of the transformation cancels in the numerator and denominator in the ratio .this is an extremely important property of the likelihood ratio that qualifies it to be a goodness of fit variable .since the denominator is independent of the theoretical parameters , both the likelihood ratio and the likelihood maximize at the same point .one can also show that the maximum value of the likelihood ratio occurs when the theoretical likelihood and the data likelihood are equal for all .in the case where the is estimated by binned histograms and the statistics are gaussian , it is readily shown that the commonly used goodness of fit variable .it is worth emphasizing that the likelihood ratio as defined above is needed and not just the negative log of theoretical likelihood to derive this result .the popular conception that is -2 log is simply incorrect!. it can also be shown that the likelihood ratio defined above can describe the binned cases where the statistics are poissonian . 
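both claims above — that the likelihood alone is not invariant ( on the hypercube it is identically 1 ) , and that the usual chi - square arises from the likelihood * ratio * in the binned , large - count limit — can be checked in a few lines ; the poisson form of -2 ln λ used below is the standard one , and the bin contents are made up for illustration :

```python
import numpy as np

rng = np.random.default_rng(0)

# (1) The likelihood alone is not invariant under change of variable.
# For data x ~ exp(-x), the hypercube transform u = 1 - exp(-x) maps the
# model density to the uniform density on (0, 1), so the transformed
# log-likelihood is identically zero for *any* data set.
x = rng.exponential(1.0, 1000)
loglik_x = np.sum(-x)                      # depends on the data
loglik_u = np.sum(np.log(np.ones(1000)))   # always 0 on the hypercube

# (2) For binned data with large counts, -2 ln(lambda) built from the
# ratio of the data likelihood (maximized at n_i) to the theory
# likelihood (evaluated at e_i) reproduces the chi-square statistic.
n = np.array([110.0, 190.0, 310.0])   # observed bin contents
e = np.array([100.0, 200.0, 300.0])   # theory expectations
neg2_ln_lr = 2.0 * np.sum(e - n + n * np.log(n / e))
chi2 = np.sum((n - e) ** 2 / e)
```

the first part makes concrete why the likelihood by itself cannot measure goodness of fit , while the second recovers the familiar chi - square only because a * ratio * of two likelihoods is used .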
in order to solve our problem of goodness of fit in unbinned likelihood cases, one needs to arrive at a method of estimating the data without the use of bins .one of the better known methods of estimating the probability density of a distribution in an unbinned case is by the use of probability density estimators , also known as kernel density estimators .the is approximated by where a kernel function is centered around each data point , is so defined that it normalizes to unity and for large approaches a dirac delta function .the choice of the kernel function can vary depending on the problem .a popular kernel is the gaussian defined in the multi - dimensional case as where is the error matrix of the data defined as and the implies average over the events , and is the number of dimensions .the hessian matrix is defined as the inverse of and the repeated indices imply summing over .the parameter is a `` smoothing parameter '' , which has a suggested optimal value , that satisfies the asymptotic condition the parameter will depend on the local number density and will have to be adjusted as a function of the local density to obtain good representation of the data by the .our proposal for the goodness of fit in unbinned likelihood fits is thus the likelihood ratio evaluated at the maximum likelihood point .we consider a simple one - dimensional case where the data is an exponential distribution , say decay times of a radioactive isotope .the theoretical prediction is given by we have chosen an exponential with for this example . the gaussian kernel for the be given by where the variance of the exponential is numerically equal to . to begin with, we chose a constant value for the smoothing parameter , which for 1000 events generated is calculated to be 0.125 . 
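the gaussian kernel estimator just described can be sketched as follows ; the width s = h σ with h = 0.125 mirrors the text s one - dimensional example , while the implementation details ( names , vectorization ) are ours :

```python
import numpy as np

def gaussian_pde(data, h):
    # Probability density estimator: a normalized Gaussian kernel of
    # width h * std(data) centered at each event.
    s = h * np.std(data)
    def pde(x):
        x = np.atleast_1d(x)[:, None]
        return np.mean(np.exp(-0.5 * ((x - data) / s) ** 2), axis=1) / (s * np.sqrt(2.0 * np.pi))
    return pde

rng = np.random.default_rng(3)
data = rng.exponential(1.0, 1000)     # "data" from f(x) = exp(-x)
pde = gaussian_pde(data, 0.125)

# Away from the boundary the estimator tracks f(x) = exp(-x) ...
mid = pde(1.0)[0]
# ... but at x = 0 roughly half of each nearby kernel's mass leaks to
# x < 0, so the estimate falls well below the true value f(0) = 1.
edge = pde(0.0)[0]
```

evaluating the estimator near x = 0 already exhibits the boundary deficit discussed next : probability spills to negative x , and the estimate there sits far below the true density .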
figure [ genev ] shows the generated events , the theoretical curve and the curve normalized to the number of events .the fails to reproduce the data near the origin due to the boundary effect , whereby the gaussian probabilities for events close to the origin spill over to negative values of .this lost probability would be compensated by events on the exponential distribution with negative if they existed . in our case , this presents a drawback for the method , which we will remedy later in the paper using definitions on the hypercube and periodic boundary conditions . for the time being , we will confine our example to values of to avoid the boundary effect . in order to test the goodness of fit capabilities of the likelihood ratio , we superimpose a gaussian on the exponential and try and fit the data by a simple exponential . and the estimator ( solid ) histogram with no errors . [ genev],width=245 ]figure [ genev1 ] shows the `` data '' with 1000 events generated as an exponential in the fiducial range .superimposed on it is a gaussian of 500 events .more events in the exponential are generated in the interval to avoid the boundary effect at the fiducial boundary at c=1.0 .since the number density varies significantly , we have had to introduce a method of iteratively determining the smoothing factor as a function of as described in . with this modification in the ,one gets a good description of the behavior of the data by the as shown in figure [ genev1 ]. 
(Figure [genev1] caption: events generated as an exponential with decay constant $\tau = 1.0$, with a superimposed Gaussian of 500 events centered at $t = 2.0$ and width 0.2; the PDE estimator is the solid histogram with no errors.)

We now vary the number of events in the Gaussian and obtain the value of the negative log likelihood ratio as a function of the strength of the Gaussian. Table [tab1] summarizes the results. The number of standard deviations the unbinned likelihood fit is from what is expected is determined empirically: we plot the value of the statistic for a large number of fits where no Gaussian is superimposed (i.e., the null hypothesis), determine the mean and RMS of this distribution, and use these to estimate the number of $\sigma$'s the observed value is from the null case. Table [tab1] also gives the results of a binned fit on the same "data." It can be seen that the unbinned fit gives a significant discrimination when the number of Gaussian events is 85, whereas the binned fit gives a $\chi^2$/ndf of 42/39 for the same case. We intend to make these tests more sophisticated in future work.

(Table [tab1] caption: goodness of fit results from unbinned likelihood and binned likelihood fits for various data samples. The negative values for the number of standard deviations in some of the examples are due to statistical fluctuation.)

Equation [pns] can be used to show that the expectation value $\bar{s}$ of the parameter is a weighted average of the values obtained from the individual experiments. Equation [sbar] states that $\bar{s}$ is the weighted average of the values obtained from individual measurements, the weight for each experiment being the "data likelihood" for that experiment.
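The empirical calibration described above, running many null-hypothesis toy experiments and quoting the observed statistic in units of the null distribution's mean and RMS, can be sketched as follows. The statistic (a summed per-event log ratio of the data PDE to the theory curve), toy counts, and sample sizes are illustrative stand-ins rather than the paper's exact procedure, and the sample size is reduced for speed.

```python
import numpy as np

rng = np.random.default_rng(1)
tau, N, h, sigma = 1.0, 500, 0.125, 1.0  # reduced sample size for speed

def pde(x, events):
    """Fixed-width Gaussian KDE, as in the constant-h example."""
    x = np.atleast_1d(x)[:, None]
    z = (x - events) / (h * sigma)
    return (np.exp(-0.5 * z**2) / (np.sqrt(2 * np.pi) * h * sigma)).mean(axis=1)

def nllr(events):
    """Summed per-event log ratio of the data PDE to the theory curve:
    large values mean the data prefer their own PDE over the theory."""
    theory = np.exp(-events / tau) / tau
    return float(np.sum(np.log(pde(events, events) / theory)))

# Empirical null distribution: toy experiments with no Gaussian admixture.
null = np.array([nllr(rng.exponential(tau, N)) for _ in range(100)])

def n_sigma(observed):
    """Observed statistic in units of the null distribution's spread."""
    return (observed - null.mean()) / null.std()

# "Data" with a superimposed Gaussian bump, mimicking the text's example.
# (A careful calibration would match the toy and data sample sizes.)
data = np.concatenate([rng.exponential(tau, N), rng.normal(2.0, 0.2, 85)])
observed_sigma = n_sigma(nllr(data))
```

This mirrors the procedure in the text: the significance is read off from where the observed statistic falls relative to the empirically determined null distribution, with no analytic reference distribution assumed.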
In the absence of experimental bias, $\bar{s}$ would be identical to the true value. It remains to be shown that the weighted average of maximum likelihood values from individual experiments also converges to the maximum likelihood point of the combined data. Also, one needs to develop an analytic theory of the goodness of fit for unbinned likelihood fits. Finally, one needs to investigate a bit more closely the transformation properties of the PDEs under a change of variable. To conclude, we have proposed a scheme for obtaining the goodness of fit in unbinned likelihood fits. This scheme involves the use of two PDEs, namely data and theory. In the process of computing the fitted errors, we have demonstrated that the quantity in the joint probability equations that has been interpreted as the "Bayesian prior" is in reality a number and not a distribution. This number is the value of the density of the parameter, which we call the "unknown concomitant," at the true value of the parameter. This number is calculated from a combination of data and theory and is seen to be an irrelevant parameter. If this viewpoint is accepted, the controversial practice of guessing distributions for the "Bayesian prior" can now be abandoned, as can be the terms "Bayesian" and "frequentist." We show how to use the posterior density to rigorously calculate fitted errors.

K. Kinoshita, "Evaluating quality of fit in unbinned maximum likelihood fitting," Proceedings of the Conference on Advanced Statistical Techniques in Particle Physics, Durham, March 2002, IPPP/02/39, DCPT/02/78.
B. Yabsley, "Statistical practice at the Belle experiment, and some questions," ibid.
R. D. Cousins, "Conference summary," ibid.
R. A. Fisher, "On the mathematical foundations of theoretical statistics," Philos. Trans. R. Soc. London Ser. A 222, 309-368 (1922);
R. A. Fisher, "Theory of statistical estimation," Proc. Cambridge Philos. Soc. 22, 700-725 (1925).
R. Raja, "A measure of the goodness of fit in unbinned likelihood fits," long write-up, http://www-conf.slac.stanford.edu/phystat2003/talks/raja/raja_bayes_maxlike.pdf
R. Raja, "End of Bayesianism?," http://www-conf.slac.stanford.edu/phystat2003/talks/raja/raja-end_bayesianism.pdf
E. Parzen, "On estimation of a probability density function and mode," Ann. Math. Statist. 32, 1065-1072 (1962).
D. Scott, Multivariate Density Estimation, John Wiley & Sons, 1992.
M. Wand and M. Jones, Kernel Smoothing, Chapman & Hall, 1995.

Abstract: Maximum likelihood fits to data can be done using binned data (histograms) and unbinned data. With binned data, one gets not only the fitted parameters but also a measure of the goodness of fit. With unbinned data, currently, the fitted parameters are obtained but no measure of goodness of fit is available. This remains, to date, an unsolved problem in statistics. Using Bayes' theorem and likelihood ratios, we provide a method by which both the fitted quantities and a measure of the goodness of fit are obtained for unbinned likelihood fits, as well as errors in the fitted quantities. The quantity conventionally interpreted as a Bayesian prior is seen in this scheme to be a number, not a distribution, that is determined from data.